I'm trying to access the values of secrets sent to a GitHub Action, but I'm struggling: the values come back as [FILTERED] every time, no matter what the key or the original value is.
I can access environment variables without a problem, so I must be screwing up somewhere else.
Essentially, what I'm trying to do is send an SSH key to my action/container, but I get the same issue when sending any other key/value as a secret.
My (simplified) GitHub Action is as follows:
action "Test" {
uses = "./.github/actions/test"
secrets = [
"SSH_PRIVATE_KEY",
"SSH_PUBLIC_KEY",
]
env = {
SSH_PUBLIC_KEY_TEST = "thisisatestpublickey"
}
}
Dockerfile:
FROM ubuntu:latest
# Args
ARG SSH_PRIVATE_KEY
ARG SSH_PUBLIC_KEY
ARG SSH_PUBLIC_KEY_TEST
# Copy entrypoint
ADD entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh:
#! /bin/sh
SSH_PATH="$HOME/.ssh"
mkdir "$SSH_PATH"
touch "$SSH_PATH/known_hosts"
echo "$SSH_PRIVATE_KEY" > "$SSH_PATH/id_rsa"
echo "$SSH_PUBLIC_KEY" > "$SSH_PATH/id_rsa.pub"
echo "$SSH_PUBLIC_KEY_TEST" > "$SSH_PATH/id_rsa_test.pub"
cat "$SSH_PATH/id_rsa"
cat "$SSH_PATH/id_rsa.pub"
cat "$SSH_PATH/id_rsa_test.pub"
The output of those three cat commands is:
[FILTERED]
[FILTERED]
thisisatestpublickey
As you can see, I can get (and use) the value of the environment variables, but the secrets aren't being exposed.
Anyone got any clues?
Just to update this, I've also simply tried echoing out both the secrets without quotes in entrypoint.sh:
echo $SSH_PRIVATE_KEY
echo $SSH_PUBLIC_KEY
...and in the log, I see the full decrypted content of $SSH_PRIVATE_KEY (i.e., the actual contents of my SSH key), while $SSH_PUBLIC_KEY still returns [FILTERED].
So it is possible to see the contents of secrets inside an action, but I don't know why I can see just one of them while the other returns [FILTERED].
Is it a caching thing, maybe?
I'm just trying to figure out a predictable way to work with this.
As you can see, I can get (and use) the value of the environment variables, but the secrets aren't being exposed.
That's because they're secrets. The Actions log output is explicitly scrubbed for secrets, so they're not displayed.
The files themselves still contain the secret values. A likely explanation for the unquoted echo above: the private key spans multiple lines, and echoing it unquoted collapses those newlines into spaces, so the output no longer matches the stored secret string and slips past the filter, while the single-line public key still matches exactly.
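To convince yourself the values really made it into the files, you can inspect them without printing the secrets; a minimal sketch, using the same paths as the entrypoint.sh above:
# Confirm the secret was written without echoing it to the (scrubbed) log
wc -c "$SSH_PATH/id_rsa"       # a non-zero byte count shows the file has content
sha256sum "$SSH_PATH/id_rsa"   # a stable hash across runs shows the same value arrived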
Related
An open source project I'm working on has components that require Linux, so virtualization has generally been the best solution for developing and testing new features. I'm attempting to provide a simple cloud-init file for Multipass that will configure the VM with our code by pulling our files from Git and setting them up in the VM automatically. However, even though the extra time elapsed during launch seems to indicate the process is being run, no files actually seem to be saved to the home directory, even in simpler cases, i.e.:
runcmd:
- [ cd, ~ ]
- [ touch test ]
- [ echo 'test' > test ]
Am I just misconfiguring cloud-init or am I missing something crucial?
There are a couple of problems going on here.
First, your cloud config user data must begin with the line:
#cloud-config
Without that line, cloud-init doesn't know what to do with it. If you were to submit a user-data configuration like this:
#cloud-config
runcmd:
- [ cd, ~ ]
- [ touch test ]
- [ echo 'test' > test ]
You would find the following errors in /var/log/cloud-init-output.log:
runcmd.0: ['cd', None] is not valid under any of the given schemas
/var/lib/cloud/instance/scripts/runcmd: 2: cd: can't cd to None
/var/lib/cloud/instance/scripts/runcmd: 3: touch test: not found
/var/lib/cloud/instance/scripts/runcmd: 4: echo 'test' > test: not found
You'll find the solution to these problems in the documentation, which includes this note about runcmd:
# run commands
# default: none
# runcmd contains a list of either lists or a string
# each item will be executed in order at rc.local like level with
# output to the console
# - runcmd only runs during the first boot
# - if the item is a list, the items will be properly executed as if
# passed to execve(3) (with the first arg as the command).
# - if the item is a string, it will be simply written to the file and
# will be interpreted by 'sh'
You passed a list of lists, so the behavior is governed by "if the item is a list, the items will be properly executed as if passed to execve(3) (with the first arg as the command)". In this case, the ~ in [cd, ~] doesn't make any sense: the command isn't being executed by a shell, so there's nothing to expand ~.
The second two commands each consist of a single list item, and there is no command on your system named touch test or echo 'test' > test.
The simplest solution here is to simply pass in a list of strings instead:
#cloud-config
runcmd:
- cd /root
- touch test
- echo 'test' > test
I've replaced cd ~ here with cd /root, because it seems better to be explicit (and you know these commands are running as root anyway).
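If you do want the list form, each command must be its own list with its arguments as separate items, and anything that needs shell features like redirection must invoke a shell explicitly. A sketch of the same commands in that style:
#cloud-config
runcmd:
  - [ touch, /root/test ]
  - [ sh, -c, "echo 'test' > /root/test" ]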
I'm currently trying to figure out how to deploy a GitLab project automatically using CI. I managed to run the build stage successfully, but I'm unsure how to retrieve those builds and push them to the releases.
As far as I know it is possible to use rsync or webhooks (for example Git-Auto-Deploy) to get the build. However, I failed to apply these options successfully.
For publishing releases I did read https://gitlab.com/gitlab-org/gitlab-ce/blob/master/doc/api/tags.md#create-a-new-release, but I'm not sure I understand the required pathing schema correctly.
Is there any simple, complete example to try out this process?
A way is indeed to use webhooks:
There are tons of different possible solutions to do that. I'd go with a sh script which is invoked by the hook.
How to intercept your webhook is up to the configuration of your server; if you have php-fpm installed, you can use a PHP script.
When you create a webhook in your Gitlab project (Settings->Webhooks) you can specify for which kind of events you want the hook (in our case, a new build), and a secret token so you can verify the script has been called by Gitlab.
The PHP script can be something like this:
<?php

// Check token
$security_file = parse_ini_file("../token.ini");
$gitlab_token = $_SERVER["HTTP_X_GITLAB_TOKEN"];
if ($gitlab_token !== $security_file["token"]) {
    echo "error 403";
    exit(0);
}

// Get data
$json = file_get_contents('php://input');
$data = json_decode($json, true);

// We only want successful builds of the deploy stage on master
if ($data["ref"] !== "master" ||
    $data["build_stage"] !== "deploy" ||
    $data["build_status"] !== "success") {
    exit(0);
}

// Execute the deploy script:
shell_exec("/usr/share/nginx/html/deploy.sh 2>&1");
I created a token.ini file outside the webroot, which is just one line:
token = supersecrettoken
In this way, only a caller that knows the secret token, i.e. GitLab itself, gets past the check. The script then checks some parameters of the build, and if everything is OK it runs the deploy script.
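To exercise the endpoint without waiting for a real build, you can simulate GitLab's call with curl; the URL and file name here are placeholders for wherever you host the script:
curl -X POST "https://example.com/hook.php" \
  -H "X-Gitlab-Token: supersecrettoken" \
  -H "Content-Type: application/json" \
  -d '{"ref":"master","build_stage":"deploy","build_status":"success"}'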
The deploy script is also very basic, but there are a couple of interesting things:
#!/bin/bash
# See 'Authentication' section here: http://docs.gitlab.com/ce/api/
SECRET_TOKEN=$PERSONAL_TOKEN
# The path where to put the static files
DEST="/usr/share/nginx/html/"
# The path to use as temporary working directory
TMP="/tmp/"
# Where to save the downloaded file
DOWNLOAD_FILE="site.zip";
cd $TMP;
wget --header="PRIVATE-TOKEN: $SECRET_TOKEN" "https://gitlab.com/api/v3/projects/774560/builds/artifacts/master/download?job=deploy_site" -O $DOWNLOAD_FILE;
ls;
unzip $DOWNLOAD_FILE;
# Whatever, do not do this in a real environment without any other check
rm -rf $DEST;
cp -r _site/ $DEST;
rm -rf _site/;
rm $DOWNLOAD_FILE;
First of all, the script has to be executable (chmod +x deploy.sh) and it has to belong to the webserver's user (usually www-data).
The script needs an access token (which you can create here) to access the data. I inserted it as an environment variable:
sudo vi /etc/environment
in the file you have to add something like:
PERSONAL_TOKEN="supersecrettoken"
and then remember to reload the file:
source /etc/environment
You can check everything is alright by running sudo -u www-data bash -c 'echo $PERSONAL_TOKEN' and verifying the token is printed in the terminal.
Now, the other interesting part of the script is where the artifact lives. The last available build of a branch is reachable only through the API; they are working on exposing it in the web interface so that you can always download the latest version from the web.
The url of the API is
https://gitlab.example.com/api/v3/projects/projectid/builds/artifacts/branchname/download?job=jobname
While you can imagine what branchname and jobname are, the projectid is a bit trickier to find.
It is included in the body of the webhook as project_id, but if you do not want to intercept the hook, you can go to the settings of your project, section Triggers, where there are examples of API calls: you can determine the project id from there.
I have a python script that I would like to run using rundeck that is invoked as follows:
createInstance.py [-n <name>] <env> <version>
Where name is optional and env and version are required.
e.g. if I want to call the script with a name I would call:
createInstance.py -n test staging 1.2.3.4
If I want to default the name, I would call:
createInstance.py staging 1.2.3.4
The problem I have is that I don't know how to specify the script arguments string in Rundeck. I have a job with three options, one each for env, version, and name, and if I define the arguments string as:
-n ${option.name} ${option.env} ${option.version}
Whenever the name is unset, rundeck calls:
createInstance.py -n staging 1.2.3.4
Instead I would like it to omit the -n. Is there any way of doing this? Right now my only option is to change the script to be more forgiving in how it handles the -n, and to always ensure it's at the end, e.g.:
createInstance.py staging 1.2.3.4 -n
createInstance.py staging 1.2.3.4 -n test
I would like to avoid making this change though, as I want to be able to use the scripts standalone as well.
Rather than use a command step, try an inline script step. Your inline script can count the number of arguments and check whether they are set. With that logic you can choose how to set the createInstance.py args.
As @Alex-SF suggests, I've also used an inline script for this, along with a Key Value Data log filter. The script is:
#!/bin/bash
# Parse optional parameters
# https://stackoverflow.com/questions/41233996/passing-optional-parameters-to-rundeck-script
# Arguments to this script should be in the format "flag" "value", e.g. "-p" ${option.name}
# If the value is not missing then the output will be "flag value", otherwise blank
echo -n "RUNDECK:DATA:"
while (( "$#" )); do
    flag="$1"
    value="$2"
    if [[ -z "$value" ]] || [[ $value =~ ^\- ]]; then
        # no value for this parameter (empty, or we picked up the next flag)
        shift
    else
        # value provided for this parameter
        echo -n "$flag $value "
        shift
        shift
    fi
done
And the Key Value Data filter uses the pattern ^RUNDECK:DATA:(.*)$ with the name args. Then I use ${data.args*} as the input for the real command.
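For illustration, running the filter script directly (calling it optargs.sh here, a hypothetical name) shows both cases:
bash optargs.sh -n test   # prints: RUNDECK:DATA:-n test
bash optargs.sh -n ""     # prints: RUNDECK:DATA: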
It's all rather messy, and I can't find any open issue requesting this as a feature (yet).
Use an inline script and bash's conditional variable expansion feature.
createInstance.py ${RD_OPTION_NAME:+-n $RD_OPTION_NAME} $RD_OPTION_ENV $RD_OPTION_VERSION
This will omit the first option altogether if it is empty ("").
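The :+ expansion is easy to test in any bash shell; the values below are hypothetical:
name="test"
echo createInstance.py ${name:+-n $name} staging 1.2.3.4   # createInstance.py -n test staging 1.2.3.4
name=""
echo createInstance.py ${name:+-n $name} staging 1.2.3.4   # createInstance.py staging 1.2.3.4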
So I run the following:
gsutil -m cp -R file.png gs://bucket/file.png
And I get the following error message:
Copying file://file.png [Content-Type=application/pdf]...
Uploading file.png: 42.59 KiB/42.59 KiB
AccessDeniedException: 401 Login Required
CommandException: 1 files/objects could not be transferred.
I'm not sure what the problem is since I ran config and I can see all my buckets. Does anyone know what I need to do?
Note: I do not have gcloud, I just installed gsutil and ran the config.
Logging in to Google Cloud is needed to access any Cloud service. Use the command below, which will guide you through the login steps, such as typing the verification code you generate by opening the browser link given in the console.
gcloud auth login
I was getting a similar response, and was able to solve this problem by looking at the read permissions on the .boto file. In my case, I was using a service account and the .boto file that was created by
gsutil config -e
only had read permission set for the user. Since it was being read by a service running as a different user, it wasn't able to read the file, yielding a 401 Login Required error. I fixed it by adding read permission for the service's group.
In the least sophisticated case, you could fix it by giving any user read permission with
chmod a+r .boto
A more detailed explanation for troubleshooting
To get more information, run the same command with a -D flag, like:
gsutil -m -D cp ....
In the debug output, look at:
Command being run: /path/to/gsutil
config_file_list: /path/to/boto/config
Create your login credentials with the executable at /path/to/gsutil (not gcloud auth or any other gsutil executable on the machine):
/path/to/gsutil config
For a service account, use:
/path/to/gsutil config -e
These commands create a .boto config file in your home directory, $HOME/.boto. When you run the gsutil command, this file should be referenced in the config_file_list variable in the debug output. If not, see below to change it.
Running gsutil under a service account or as another user
If you are running as another user, and need to reference a newly-created config file, set the environment variable BOTO_CONFIG (don't forget to export it):
BOTO_CONFIG=/path/to/$HOME/.boto
export BOTO_CONFIG
By setting this variable, when you execute gsutil, it will reference the config file you have placed in BOTO_CONFIG. You can confirm that you are referencing the correct config file by looking at the config_file_list variable in the gsutil -D command's output.
Make sure the referenced .boto file is readable by the user who is executing the gsutil command.
Running the /path/to/gsutil with the BOTO_CONFIG variable set allowed me to execute gsutil as another user, referencing an arbitrary BOTO_CONFIG file that was set up with a service-account's credentials.
To set up the service account:
https://console.cloud.google.com/permissions/serviceaccounts
The key file from the service account set-up process needs to be downloaded, and the path to it is requested during the gsutil config -e step.
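Putting it together, a condensed sketch of the service-account flow; the paths, bucket name, and service user are illustrative:
# Generate a boto config for the service account; this prompts for the
# path to the key file downloaded during service-account setup
/path/to/gsutil config -e

# Make the resulting config readable by the service user and point gsutil at it
chmod a+r "$HOME/.boto"
export BOTO_CONFIG="$HOME/.boto"

# Run gsutil as the service user against that same config
sudo -u www-data BOTO_CONFIG="$BOTO_CONFIG" /path/to/gsutil ls gs://your-bucket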
This may be an issue with how gsutil/boto handles the OS path separators on Windows, as referenced here. This should eventually get merged into the sdk tools, but until then the following should work:
Go to
google-cloud-sdk\platform\gsutil\third_party\boto\boto\pyami\config.py
and replace the line:
for path in os.environ['BOTO_PATH'].split(':'):
with:
for path in os.environ['BOTO_PATH'].split(os.path.pathsep):
Next, go to
google-cloud-sdk\bin\bootstrapping\gsutil.py
and replace the lines that use ':':
if boto_config:
    boto_path = ':'.join([boto_config, gsutil_path])
elif boto_path:
    # this is ':' for windows as well, hardcoded into the boto source.
    boto_path = ':'.join([boto_path, gsutil_path])
else:
    path_parts = ['/etc/boto.cfg',
                  os.path.expanduser(os.path.join('~', '.boto')),
                  gsutil_path]
    boto_path = ':'.join(path_parts)
with
if boto_config:
    boto_path = os.path.pathsep.join([boto_config, gsutil_path])
elif boto_path:
    boto_path = os.path.pathsep.join([boto_path, gsutil_path])
else:
    path_parts = ['/etc/boto.cfg',
                  os.path.expanduser(os.path.join('~', '.boto')),
                  gsutil_path]
    boto_path = os.path.pathsep.join(path_parts)
Restart cmd and now the error should go away.
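For context, the ':' vs ';' difference is exactly what os.path.pathsep abstracts over; you can check it on either platform with:
python -c "import os; print(os.path.pathsep)"   # prints ':' on POSIX, ';' on Windows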
Here is a part of my .conf file.
env SERVICE_ROOT="/data/service_root"
env LOG_DIR="$SERVICE_ROOT/logs"
and I checked all the variables with the following:
echo "\n`env`" >> /tmp/listener.log 2>&1
I expected $LOG_DIR to be "/data/service_root/logs", but what I got is:
SERVICE_ROOT=/data/service_root
LOG_DIR=$SERVICE_ROOT/logs
Did I miss something?
A variable defined with an env stanza is not accessible to the Job Configuration File itself: env values are taken literally, so $SERVICE_ROOT is not expanded inside another env stanza.
Upstart allows you to set environment variables which will be accessible to the jobs whose job configuration files they are defined in.
As explained in 8.2 Environment Variables:
Note that a Job Configuration File does not have access to a user's environment variables, not even the superuser. This is not possible since all job processes created are children of init which does not have a user's environment.
The defined variable $SERVICE_ROOT is, however, accessible to the job it is defined in:
# /etc/init/test.conf
env SERVICE_ROOT="/data/service_root"

script
    export LOG_DIR="$SERVICE_ROOT/logs"

    # prints "LOG_DIR='/data/service_root/logs'" to system log
    logger -t $0 "LOG_DIR='$LOG_DIR'"

    exec /home/vagrant/test.sh >> /tmp/test.log
end script
The variable $LOG_DIR exported in the script block is available to processes called within the same block.
#!/bin/bash -e
# /home/vagrant/test.sh
echo "running test.sh"
echo "\n`env`" | grep 'LOG_DIR\|SERVICE_ROOT'
After running sudo start test, the content of /tmp/test.log will be:
running test.sh
SERVICE_ROOT=/data/service_root
LOG_DIR=/data/service_root/logs
In syslog you will find:
Jul 16 01:39:39 vagrant-ubuntu-raring-64 /proc/self/fd/9: LOG_DIR='/data/service_root/logs'