I am trying to run some Vault commands from a shell script and it gives me the following error:
line 13: vault: command not found
line 14: vault: command not found
But I have already installed Vault, stored an OpenSSL key and certificate using the KV secrets engine, and successfully retrieved that key-value pair using terminal commands.
This is the shell script that I have used:
#!/bin/sh
PARENT_DIR=sslcertnkeys
CERT_FILE=apache-cert.crt
KEY_FILE=apache-key.key
export VAULT_ADDR='http://127.0.0.1:8200'
export VAULT_TOKEN=s.8AreSPokOQPF9Rs1xp82TDz2
# make parent directory if not exists
[ -e $PARENT_DIR ] || mkdir -p $PARENT_DIR
# remove files if exists
[ -e $CERT_FILE ] && rm -f $CERT_FILE
[ -e $KEY_FILE ] && rm -f $KEY_FILE
# retrieve certificate and key from vault and store in disk
vault kv get -field=certificate certs/apache > $CERT_FILE
vault kv get -field=private_key certs/apache > $KEY_FILE
# unset vault address
unset VAULT_ADDR
# unset vault login token
unset VAULT_TOKEN
# start apache server
systemctl start httpd.service
Can someone help me figure out what the problem is here?
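A likely culprit (not confirmed in the question): when a script runs under sudo, cron, or a minimal /bin/sh environment, its PATH often lacks the directory that holds the vault binary. A small diagnostic sketch; the /usr/local/bin/vault path is an assumption, adjust it to wherever vault was installed:
#!/bin/sh
# Show the PATH the script actually runs with and look for vault on it.
echo "PATH is: $PATH"
command -v vault || echo "vault is not on this PATH"
# Workaround sketch: call vault by absolute path instead of relying on PATH.
/usr/local/bin/vault kv get -field=certificate certs/apache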
When I enter the command in PowerShell I get this error:
invalid argument "Dockerfile2" for "-t, --tag" flag: invalid reference format: repository name must be lowercase
See 'docker build --help'.
I created the Dockerfile in Word but saved it as plain text.
This is what I typed in my Dockerfile.
FROM centos:7
ENV container docker
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == \
systemd-tmpfiles-setup.service ] || rm -f $i; done); \
rm -f /lib/systemd/system/multi-user.target.wants/*;\
rm -f /etc/systemd/system/*.wants/*;\
rm -f /lib/systemd/system/local-fs.target.wants/*; \
rm -f /lib/systemd/system/sockets.target.wants/*udev*; \
rm -f /lib/systemd/system/sockets.target.wants/*initctl*; \
rm -f /lib/systemd/system/basic.target.wants/*;\
rm -f /lib/systemd/system/anaconda.target.wants/*;
VOLUME [ "/sys/fs/cgroup" ]
CMD ["/usr/sbin/init"]
You are tagging your Docker image as "Dockerfile2".
You can't use uppercase letters when tagging your Docker image.
Change the -t parameter from "Dockerfile2" to "dockerfile2" when building the image.
Based on the error message, tag names have to be lowercase.
Try changing "Dockerfile2" in your command to the all-lowercase "dockerfile2".
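For example, running the build from the directory containing the Dockerfile (a sketch; the tag name simply mirrors the question's):
# The repository name passed to -t must be lowercase.
docker build -t dockerfile2 .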
Please do the following in order to build it successfully using PowerShell:
First, check that Docker is installed on your system by entering the "docker --version" command in your PowerShell. If you see your Docker version, you are good to go; otherwise install Docker properly.
Create a simple text file (not a Word document etc.) called Dockerfile (if you use another file name you will have to specify it with the -f option).
Paste your Dockerfile entries into it and save the file.
In your PowerShell, go to the path that includes your Dockerfile and run "docker build -t dockerfile2 ." (the tag must be lowercase).
Check your new image by running "docker image ls".
In my environment, your file built successfully, but there was a warning regarding one of the commands in your Dockerfile entries:
[WARNING]: Empty continuation line found in:
RUN (cd /lib/systemd/system/sysinit.target.wants/; for i in *; do [ $i == systemd-tmpfiles-setup.service ] || rm -f $i; done); rm -f /lib/systemd/system/multi-user.target.wants/*; rm -f /etc/systemd/system/*.wants/*; rm -f /lib/systemd/system/local-fs.target.wants/*; rm -f /lib/systemd/system/sockets.target.wants/*udev*; rm -f /lib/systemd/system/sockets.target.wants/*initctl*; rm -f /lib/systemd/system/basic.target.wants/*; rm -f /lib/systemd/system/anaconda.target.wants/*;
[WARNING]: Empty continuation lines will become errors in a future release.
I have set up my master nodes using kubeadm.
Now I want to run the join command on my nodes so that they join the cluster.
All I have to do is run
kubeadm join --token <token> --discovery-token-ca-cert-hash <sha256>
where <token> and <sha256> are values previously returned by the command below:
kubeadm init
I am also trying to script the above process, and I see that parsing the actual tokens from the last command is kind of difficult;
so I was wondering whether there is a way to explicitly specify the <token> and the <sha256> during cluster initialization, to avoid having to perform hacky parsing of the init command.
I was trying to make a script for it as well.
In order to get the values needed I am using these commands:
TOKEN=$(sshpass -p $PASSWORD ssh -o StrictHostKeyChecking=no root@$MASTER_IP sudo kubeadm token list | tail -1 | cut -f 1 -d " ")
HASH=$(sshpass -p $PASSWORD ssh -o StrictHostKeyChecking=no root@$MASTER_IP openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //' )
Basically I use these commands to SSH onto the master and fetch those values.
I have not found an easier way to achieve this.
Actually there seems to be a way around this:
(I am putting this in Ansible tasks because that is where I am planning to use it.)
- name: kubernetes.yml --> Initiate kubernetes cluster
  shell: 'kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address={{ ansible_facts[if_name]["ipv4"]["address"] }}'
  become: yes
  when: inventory_hostname in groups['masters']

- name: kubernetes.yml --> Get the join command
  shell: kubeadm token create --print-join-command
  register: rv_join_command
  when: inventory_hostname in (groups['masters'] | last)
  become: yes

- name: kubernetes.yml --> Print the join command
  debug:
    var: rv_join_command.stdout
Output:
TASK [kubernetes.yml --> Print the join command] *******************************
ok: [kubernetes-master-1] =>
rv_join_command.stdout: 'kubeadm join 192.168.30.1:6443 --token ah0dbr.grxg9fke3c28dif3i --discovery-token-ca-cert-hash sha256:716712ca7f07bfb4aa7df9a8b30ik3t0k3t2259b8c6fc7b68f50334356078 '
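Alternatively, kubeadm accepts a pre-generated token at init time, which avoids parsing its output entirely. A sketch using kubeadm's documented --token flag (the master address is a placeholder):
# Generate a valid token up front so it never has to be scraped from kubeadm's output.
TOKEN=$(kubeadm token generate)
kubeadm init --token "$TOKEN" --pod-network-cidr=10.244.0.0/16
# The discovery hash can be computed from the CA certificate at any time:
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
# Workers then join with values that are known in advance:
kubeadm join <master-ip>:6443 --token "$TOKEN" --discovery-token-ca-cert-hash "sha256:$HASH"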
I'm having trouble setting up access to a Kubernetes cluster from the outside. This is what I'm trying to achieve:
- Be able to access the kube cluster from the outside (from nodes that are not "master", and even from any remote machine) to perform kube actions only on a specific namespace.
My logic was to do the following:
Create a new namespace (let's call it testns)
Create a service account (testns-account)
Create a role which grants permission to create any type of kube resource inside the testns namespace
Create a role binding which binds the service account to the role
Generate a token from the service account
Now, my logic was that I need the token + API server URL to access the kube cluster with limited permissions, but that doesn't seem to be enough.
What would be the easiest way to achieve this? To start, I could use kubectl just to verify that the limited permissions on the namespace work, but eventually I would have client-side code which does the access and creates kube resources with these limited permissions.
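For reference, a sketch of the setup steps listed above using kubectl; the role and binding names are invented for the example:
kubectl create namespace testns
kubectl create serviceaccount testns-account -n testns
# Allow every verb on every resource, but only inside the testns namespace.
kubectl create role testns-full-access -n testns --verb='*' --resource='*'
kubectl create rolebinding testns-full-access-binding -n testns \
  --role=testns-full-access --serviceaccount=testns:testns-account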
You need to generate a kubeconfig from the token. There are scripts to handle this. Here is one for posterity:
#!/usr/bin/env bash

# Copyright 2017, Z Lab Corporation. All rights reserved.
# Copyright 2017, Kubernetes scripts contributors
#
# For the full copyright and license information, please view the LICENSE
# file that was distributed with this source code.

set -e

if [[ $# == 0 ]]; then
  echo "Usage: $0 SERVICEACCOUNT [kubectl options]" >&2
  echo "" >&2
  echo "This script creates a kubeconfig to access the apiserver with the specified serviceaccount and outputs it to stdout." >&2
  exit 1
fi

function _kubectl() {
  kubectl "$@" $kubectl_options
}

serviceaccount="$1"
kubectl_options="${@:2}"

if ! secret="$(_kubectl get serviceaccount "$serviceaccount" -o 'jsonpath={.secrets[0].name}' 2>/dev/null)"; then
  echo "serviceaccounts \"$serviceaccount\" not found." >&2
  exit 2
fi

if [[ -z "$secret" ]]; then
  echo "serviceaccounts \"$serviceaccount\" doesn't have a serviceaccount token." >&2
  exit 2
fi

# context
context="$(_kubectl config current-context)"

# cluster
cluster="$(_kubectl config view -o "jsonpath={.contexts[?(@.name==\"$context\")].context.cluster}")"
server="$(_kubectl config view -o "jsonpath={.clusters[?(@.name==\"$cluster\")].cluster.server}")"

# token
ca_crt_data="$(_kubectl get secret "$secret" -o "jsonpath={.data.ca\.crt}" | openssl enc -d -base64 -A)"
namespace="$(_kubectl get secret "$secret" -o "jsonpath={.data.namespace}" | openssl enc -d -base64 -A)"
token="$(_kubectl get secret "$secret" -o "jsonpath={.data.token}" | openssl enc -d -base64 -A)"

export KUBECONFIG="$(mktemp)"
kubectl config set-credentials "$serviceaccount" --token="$token" >/dev/null
ca_crt="$(mktemp)"; echo "$ca_crt_data" > "$ca_crt"
kubectl config set-cluster "$cluster" --server="$server" --certificate-authority="$ca_crt" --embed-certs >/dev/null
kubectl config set-context "$context" --cluster="$cluster" --namespace="$namespace" --user="$serviceaccount" >/dev/null
kubectl config use-context "$context" >/dev/null

cat "$KUBECONFIG"
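Usage might look like this (the script filename is arbitrary):
# Generate a kubeconfig for the service account from the question, then use it:
./create-kubeconfig testns-account -n testns > testns-kubeconfig
kubectl --kubeconfig=testns-kubeconfig get pods -n testns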
I want to access my GitHub repositories via SSH. When I access a repository for the first time, I am asked if I want to add the GitHub SSH server to my known_hosts file, which works fine. That request also shows me the RSA key fingerprint of that server, and I can manually verify that it is the same one provided by GitHub here.
These are the SHA256 hashes shown in OpenSSH 6.8 and newer (in base64 format):
SHA256:nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8 (RSA)
SHA256:br9IjFspm1vxR3iA35FWE+4VTyz1hYVLIE2t1/CeyWQ (DSA)
The problem is that I want to prevent that request by adding the public key
to my known_hosts file before my first access to my git repository. This can be done with the ssh-keyscan -t rsa www.github.com command, which gives me a public key in the format required by the known_hosts file. But people repeatedly mention that this is not safe and is vulnerable to man-in-the-middle attacks. What they do not mention is how to do it right.
So how can I use the RSA fingerprint provided on the GitHub page to safely get the public host key of the SSH server? I am more or less looking for an option to the ssh-keyscan command that lets me pass the expected RSA fingerprint and causes the command to fail if the host's fingerprint does not match the given one.
Thank you for your time!
I would not use ssh-keyscan blindly in that case.
Rather, I would use it and double-check the result by comparing its fingerprint with the one provided by GitHub.
And then proceed with an SSH GitHub test, to check I do get:
Hi username! You've successfully authenticated, but GitHub does not
provide shell access.
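That test is simply:
# -T disables pseudo-terminal allocation; GitHub replies with the greeting above.
ssh -T git@github.com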
So, as recommended here, for the manual process:
ssh-keyscan github.com >> githubKey
Generate the fingerprint:
ssh-keygen -lf githubKey
Compare it with the ones provided by GitHub
Finally, copy githubKey content to your ~/.ssh/known_hosts file.
You can automate that process (still including the fingerprint step check) with wercker/step-add-to-known_hosts: it is a wercker step, but can be extrapolated as its own independent script.
- add-to-known_hosts:
    hostname: github.com
    fingerprint: 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48
    type: rsa
But that would lack the check against help.github.com/articles/github-s-ssh-key-fingerprints: see below.
Using nmap does not help much, as explained here:
Using nmap to get the SSH host key fingerprint and then comparing it to what ssh-keyscan says the fingerprint is: in both cases, the fingerprint comes from the same place.
It's just as vulnerable to MITM as any other of these automated solutions.
The only secure and valid way to verify an SSH public key is over some trusted out-of-band channel. (Or set up some kind of key-signing infrastructure.)
Here, help.github.com/articles/github-s-ssh-key-fingerprints remains the "trusted out-of-band channel".
Based on VonC's answer, the script below can verify and add the key automatically. Use it like this:
$ ./add-key.sh github.com nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8
It tells you whether it successfully verified and saved the fingerprint.
For usage info, use ./add-key.sh --help
The script:
#!/usr/bin/env bash

# Settings
knownhosts="$HOME/.ssh/known_hosts"

if [ "x$1" == "x-h" ] || [ "x$1" == "x--help" ] || [ ${#1} == 0 ]; then
    echo "Usage: $0 <host> <fingerprint> [<port>]"
    echo "Example: $0 github.com nThbg6kXUpJWGl7E1IGOCspRomTxdCARLviKw6E5SY8"
    echo "The default port is 22."
    echo "The script will download the ssh keys from <host>, check if any match"
    echo "the <fingerprint>, and add that one to $knownhosts."
    exit 1
fi

# Argument handling
host=$1
fingerprint=$2
port=$(if [ -n "$3" ]; then echo "$3"; else echo 22; fi)

# Download the actual key (you cannot convert a fingerprint to the original key)
keys="$(ssh-keyscan -p $port $host |& grep -v ^\#)";
echo "$keys" | grep -v "^$host" # Show any errors
keys="$(echo "$keys" | grep "^$host")"; # Remove errors from the variable
if [ ${#keys} -lt 20 ]; then echo Error downloading keys; exit 2; fi

# Find which line contains the key matching this fingerprint
line=$(ssh-keygen -lf <(echo "$keys") | grep -n "$fingerprint" | cut -b 1-1)

if [ ${#line} -gt 0 ]; then # If there was a matching fingerprint (todo: shouldn't this be -ge or so?)
    # Take that line
    key=$(head -$line <(echo "$keys") | tail -1)
    # Check if the key part (column 3) of that line is already in $knownhosts
    if [ -n "$(grep "$(echo "$key" | awk '{print $3}')" $knownhosts)" ]; then
        echo "Key already in $knownhosts."
        exit 3
    else
        # Add it to known hosts
        echo "$key" >> $knownhosts
        # And tell the user what kind of key they just added
        keytype=$(echo "$key" | awk '{print $2}')
        echo Fingerprint verified and $keytype key added to $knownhosts
    fi
else # If there was no matching fingerprint
    echo MITM? These are the received fingerprints:
    ssh-keygen -lf <(echo "$keys")
    echo Generated from these received keys:
    echo "$keys"
    exit 1
fi
My one-liner allows for error reporting on failure:
touch ~/.ssh/known_hosts && if [ $(grep -c 'github.com ssh-rsa' ~/.ssh/known_hosts) -lt 1 ]; then KEYS=$(KEYS=$(ssh-keyscan github.com 2>&1 | grep -v '#'); ssh-keygen -lf <(echo $KEYS) || echo $KEYS); if [[ $KEYS =~ '(RSA)' ]]; then if [ $(curl -s https://help.github.com/en/github/authenticating-to-github/githubs-ssh-key-fingerprints | grep -c $(echo $KEYS | awk '{print $2}')) -gt 0 ]; then echo '[GitHub key successfully verified]' && ssh-keyscan github.com 1>~/.ssh/known_hosts; fi; else echo \"ssh-keygen -lf failed:\\n$KEYS\"; exit 1; fi; unset KEYS; fi
GitHub now offers this information in its Meta API, see About GitHub's IP addresses. The JSON output includes the public SSH keys, so assuming your HTTPS client correctly verifies the certificate chain, you can fetch the keys from there.
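A shell sketch of the same idea, assuming curl and jq are available:
# Fetch GitHub's published host keys over verified HTTPS and append them
# to known_hosts in the "hostname key" format.
curl -fsSL https://api.github.com/meta \
  | jq -r '.ssh_keys[]' \
  | sed 's/^/github.com /' >> ~/.ssh/known_hosts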
Below is an Ansible task that accomplishes this:
# Copyright 2022 Google LLC.
# SPDX-License-Identifier: Apache-2.0
- name: Add github.com public keys to known_hosts
  ansible.builtin.known_hosts:
    path: /etc/ssh/ssh_known_hosts
    name: github.com
    # Download the keys from the GitHub API and prepend 'github.com' to them to
    # match the known_hosts format.
    key: |
      {% for key in (lookup('ansible.builtin.url',
                            'https://api.github.com/meta',
                            split_lines=False, validate_certs=True)
                     | from_json)['ssh_keys'] %}
      github.com {{ key }}
      {% endfor %}
As a user, gsutil works fine.
gsutil works fine when called from crontab (user).
As root, gsutil says:
Caught non-retryable exception while listing gs://....: ServiceException: 401 Anonymous users does not have storage.objects.list access to bucket ....
gsutil does not work when called from Anacron (root).
Other scripts called from Anacron run fine.
The ~/.boto file contains credentials and is located in the user's HOME directory.
So maybe that is causing the exception.
I tried setting BOTO_CONFIG, but it didn't change the results:
$ gsutil -D ls 2>&1 | grep config_file_list
config_file_list: ['/home/wolfv/.boto']
$ sudo gsutil -D ls 2>&1 | grep config_file_list
config_file_list: []
$ BOTO_CONFIG="/root/.boto"
$ sudo gsutil -D ls 2>&1 | grep config_file_list
config_file_list: []
How do I set up gsutil to run from Anacron?
$ gsutil -D
gsutil version: 4.22
checksum: 2434a37a663d09ae21d1644f64ce60ca (OK)
boto version: 2.42.0
python version: 2.7.13 (default, Jan 12 2017, 17:59:37) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]
OS: Linux 4.9.11-200.fc25.x86_64
multiprocessing available: True
using cloud sdk: True
config path: /home/wolfv/.boto
gsutil path: /home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil
compiled crcmod: True
installed via package manager: False
editable install: False
Command being run: /home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil -o GSUtil:default_project_id=redacted -D
config_file_list: ['/home/wolfv/.config/gcloud/legacy_credentials/redacted/.boto', '/home/wolfv/.boto']
config: [('debug', '0'), ('working_dir', '/mnt/pyami'), ('https_validate_certificates', 'True'), ('debug', '0'), ('working_dir', '/mnt/pyami'), ('content_language', 'en'), ('default_api_version', '2'), ('default_project_id', 'redacted')]
UPDATE_1
export BOTO_CONFIG worked for the terminal:
$ sudo -s
[root] # export BOTO_CONFIG=/home/wolfv/.boto
[root] # gsutil -D ls 2>&1 | grep config_file_list
config_file_list: ['/home/wolfv/.boto']
[root] # vi /root/.bashrc
add this line to the end of .bashrc:
export BOTO_CONFIG=/home/wolfv/.boto
exit
open a new terminal and test the new BOTO_CONFIG from .bashrc:
$ sudo -s
[root] # gsutil -D ls 2>&1 | grep config_file_list
config_file_list: ['/home/wolfv/.boto']
exit
Unfortunately, exporting BOTO_CONFIG in /root/.bashrc did not help Anacron call gsutil.
The backup log shows that Anacron called the backup script, and the backup script's call to gsutil failed.
Does it matter which initialization script sets BOTO_CONFIG?
To make the path permanently accessible to Anacron (root), in which file should BOTO_CONFIG be set?:
/etc/profile
/root/.bash_profile
/root/.bashrc
UPDATE_2
My credentials are now invalid, probably from some change I made.
Here is my attempt at houglum's suggestions for BOTO_CONFIG.
First authorize login to get that out of the way:
$ gcloud auth login
Your browser has been opened to visit:
https://accounts.google.com/o/oauth2/auth?redirect_uri=http%3A%2F%2Flocalhost%3A8085%2F&prompt=select_account&response_type=code&client_id=redacted.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Faccounts.reauth&access_type=offline
Created new window in existing browser session.
WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
gcloud auth application-default --help
You are now logged in as [redacted].
Your current project is [redacted]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
Defining BOTO_CONFIG inline does not work:
$ BOTO_CONFIG=/home/wolfv/.boto gsutil ls
Your credentials are invalid. Please run
$ gcloud auth login
Exporting BOTO_CONFIG does not work:
$ export BOTO_CONFIG=/home/wolfv/.boto; gsutil ls
Your credentials are invalid. Please run
$ gcloud auth login
Sourcing .bashrc does not work:
$ ls /home/wolfv/.bashrc
/home/wolfv/.bashrc
$ . /home/wolfv/.bashrc; gsutil ls
Your credentials are invalid. Please run
$ gcloud auth login
UPDATE_3
My credentials work if I remove my credentials from .boto and use auth login instead (based on "Your credentials are invalid. Please run $ gcloud auth login"):
$ gcloud auth login redacted@email.com
WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
gcloud auth application-default --help
You are now logged in as [redacted@email.com].
Your current project is [redacted-123]. You can change this setting by running:
$ gcloud config set project PROJECT_ID
After using auth login, gsutil works from the terminal:
$ gsutil ls
gs://redacted/
gs://redacted/
gs://redacted/
And the backup script that calls gsutil also works from the terminal:
$ ~/scripts/backup_to_gcs/backup_to_gcs.sh
backup_to_gcs.sh in progress ...
backup_to_gcs.sh completed successfully
However, backup_to_gcs.sh fails when called from crontab.
How to run gsutil from crontab?
UPDATE_4
This is in my anacron file:
1 10 anacron_test_id BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/wolfvolpi@gmail.com/.boto:/home/wolfv/.boto /home/wolfv/scripts/backup_to_gcs/backup_to_gcs.sh
anacron runs the backup_to_gcs.sh script as expected, but the backup fails.
When the backup_to_gcs.sh script is called from the command line, it works fine.
Probably because gsutil runs as user, but does not run as root:
$ gsutil ls
gs://wolfv/
gs://wolfv-test-log/
gs://wolfv2/
gs://wolfvtest/
$ BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/wolfvolpi@gmail.com/.boto:/home/wolfv/.boto gsutil ls
gs://wolfv/
gs://wolfv-test-log/
gs://wolfv2/
gs://wolfvtest/
$ sudo BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/wolfvolpi@gmail.com/.boto:/home/wolfv/.boto gsutil ls
sudo: gsutil: command not found
$ sudo gsutil ls
sudo: gsutil: command not found
Two days ago root was able to run gsutil.
Since then I used dnf history rollback to uninstall different software.
Could that have affected gsutil authentication?
UPDATE_5
I followed the instructions on https://cloud.google.com/storage/docs/authentication#gsutilauth
USING SERVICE ACCOUNT
$ gcloud auth activate-service-account --key-file=/home/wolfv/REDACTED.json
Activated service account credentials for: [REDACTED@appspot.gserviceaccount.com]
But still, root could not run gsutil:
$ sudo gsutil ls
sudo: gsutil: command not found
$ gsutil ls -la gs://wolfvtest/test_lifecycle/
CommandException: You have multiple types of configured credentials (['Oauth 2.0 User Account', 'OAuth 2.0 Service Account']), which is not supported. One common way this happens is if you run gsutil config to create credentials and later run gcloud auth, and create a second set of credentials. Your boto config path is: ['/home/wolfv/.boto', '/home/wolfv/.config/gcloud/legacy_credentials/my-project@appspot.gserviceaccount.com/.boto']. For more help, see "gsutil help creds".
The help refers to a page that no longer mentions "auth": https://developers.google.com/cloud/sdk/gcloud/#gcloud.auth
So I have one too many credentials:
$ gsutil -D
...
config_file_list: ['/home/wolfv/.boto', '/home/wolfv/.config/gcloud/legacy_credentials/my-project@appspot.gserviceaccount.com/.boto']
Are any of these credentials used by root (for anacron)?
They are not in the root directory.
Should credentials needed for anacron be in the root directory?
UPDATE_6
I tried again after installing Fedora 26; see: How to authorize root to run gsutil?
When you execute BOTO_CONFIG=<value> in the shell, you're not actually defining an environment variable, but rather a local shell variable (see this thread for more details). You want to either define the variable inline with the command:
BOTO_CONFIG=/path/to/config gsutil ls
or first export the BOTO_CONFIG environment variable, then run the gsutil command:
export BOTO_CONFIG=/path/to/config; gsutil ls
EDIT:
I just noticed that in addition to your own $HOME/.boto file, you're relying on gcloud's credentials that get set up from gcloud auth login. When you run this, gcloud creates another .boto file for you, and when you run gsutil from gcloud's wrapper script, it loads that .boto file first, followed by whatever .boto file(s) you specify with either the BOTO_CONFIG or BOTO_PATH environment variable.
If you want to run as root (which the cron job does) and use both those .boto files, you'll need to instead use the BOTO_PATH variable to list them, separated by colons, also making sure the BOTO_CONFIG environment variable is not set (BOTO_CONFIG takes precedence over BOTO_PATH... the gsutil docs mention this briefly):
BOTO_PATH=/home/wolfv/.config/gcloud/legacy_credentials/REDACTED/.boto:/home/wolfv/.boto gsutil ls
EDIT 2:
1) When you get the error "sudo: gsutil: command not found", it means that the root user cannot find the gsutil executable in its PATH. You should use the absolute path to the gsutil executable instead -- from your post, it looks like this is /home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil.
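For example, using the gsutil path from the -D output above:
# Call the executable by absolute path so root does not depend on the user's PATH.
sudo /home/wolfv/Downloads/google-cloud-sdk/platform/gsutil/gsutil ls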
2) When you activate service account credentials, the gcloud wrapper for gsutil will create a separate .boto file (with a path containing legacy_credentials/myproject#appspot[...]), and prefer to use this one if it's present. It contains the attribute gs_service_key_file, while your other .boto file probably contains gs_oauth2_refresh_token -- loading multiple .boto files with multiple credentials attributes like this will result in the error you're seeing.
If you want to use gcloud to manage your auth credentials, you generally shouldn't put anything under the [Credentials] section of your $HOME/.boto file.