I generated the root token during the initialization of the Vault using the following command:
$ kubectl exec vault-0 -- vault operator init \
-key-shares=1 \
-key-threshold=1 \
-format=json > cluster-keys.json
However, I have lost the file cluster-keys.json.
Is it possible to get the cluster-keys.json content again without re-initializing?
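For reference, the file written by that command looked roughly like this (a sketch with placeholder values; the exact fields can vary by Vault version):
{
  "unseal_keys_b64": ["<base64-encoded unseal key>"],
  "unseal_keys_hex": ["<hex-encoded unseal key>"],
  "unseal_shares": 1,
  "unseal_threshold": 1,
  "recovery_keys_b64": [],
  "recovery_keys_hex": [],
  "root_token": "<root token>"
}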
How do I create an AKS cluster in C#? I have not found a way to do so using the C# K8s SDK: https://github.com/kubernetes-client/csharp
Basically, I want the equivalent of the following command:
az aks create -g $RESOURCE_GROUP -n $AKS_CLUSTER \
--enable-addons azure-keyvault-secrets-provider \
--enable-managed-identity \
--node-count $AKS_NODE_COUNT \
--generate-ssh-keys \
--enable-pod-identity \
--network-plugin azure
Send a PUT request with a JSON payload (the managed cluster definition) to ARM.
See this: https://learn.microsoft.com/en-us/rest/api/aks/managed-clusters/create-or-update?tabs=HTTP
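As a minimal sketch of the raw REST call (untested; the subscription, resource group, cluster name, and api-version are placeholders you should check against the linked docs). The same PUT can be issued from C# with HttpClient:
# Get a bearer token for ARM (assumes the Azure CLI is logged in)
TOKEN=$(az account get-access-token --query accessToken -o tsv)
# PUT the managed cluster definition; api-version is an assumption, check the docs
curl -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "location": "eastus",
        "identity": { "type": "SystemAssigned" },
        "properties": {
          "dnsPrefix": "myaks",
          "agentPoolProfiles": [
            { "name": "nodepool1", "count": 3, "vmSize": "Standard_DS2_v2", "mode": "System" }
          ],
          "networkProfile": { "networkPlugin": "azure" }
        }
      }' \
  "https://management.azure.com/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ContainerService/managedClusters/$AKS_CLUSTER?api-version=2023-05-01"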
To make Key Vault-based parameters available to my Azure web app, I executed the following:
identity=`az webapp identity assign \
--name $(appName) \
--resource-group $(appResourceGroupName) \
--query principalId -o tsv`
az keyvault set-policy \
--name $(keyVaultName) \
--secret-permissions get \
--object-id $identity
Now I want to create an Azure Postgres server, taking the admin password from a key vault:
az postgres server create \
--location $(location) \
--resource-group $(ResourceGroupName) \
--name $(PostgresServerName) \
--admin-user $(AdminUserName) \
--admin-password '$(AdminPassWord)' \
--sku-name $(pgSkuName)
If the value of my AdminPassWord here is something like
#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)
I need the single quotes (as above) to get the Postgres server created. But does this mean that the password will be the whole string '#Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/)' instead of the secret stored in <myKv>?
When running my pipeline without the quotes (i.e. just --admin-password $(AdminPassWord) \), I got the error message syntax error near unexpected token ('. I thought this could be a consequence of the fact that I haven't set the policy --secret-permissions get for the Postgres server resource. But how can I set it before creating the Postgres server?
The expression #Microsoft.KeyVault(SecretUri=https://<myKv>.vault.azure.net/secrets/AdminPassWord/) is used to access a Key Vault secret value from an Azure web app: once you configure it with the first two commands, the web app's managed identity is able to read the Key Vault secret.
But if you want to create an Azure Postgres server with that password, you need to obtain the secret value first and use it, rather than passing the expression.
With the Azure CLI, you can use az keyvault secret show, then pass the secret value to the --admin-password parameter of az postgres server create (see the sketch after the synopsis below).
az keyvault secret show [--id]
[--name]
[--query-examples]
[--subscription]
[--vault-name]
[--version]
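For example, a minimal sketch (assuming the secret in <myKv> is named AdminPassWord and reusing the pipeline variables from above):
# Fetch the secret value; $(keyVaultName) etc. are pipeline variables
ADMIN_PASSWORD=$(az keyvault secret show \
  --vault-name $(keyVaultName) \
  --name AdminPassWord \
  --query value -o tsv)
# Pass the real value, not the #Microsoft.KeyVault(...) expression
az postgres server create \
  --location $(location) \
  --resource-group $(ResourceGroupName) \
  --name $(PostgresServerName) \
  --admin-user $(AdminUserName) \
  --admin-password "$ADMIN_PASSWORD" \
  --sku-name $(pgSkuName)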
I have minikube running, and I am trying to list the keys in my etcd.
I downloaded the latest etcdctl client from GitHub:
https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
I tried to run it with the certificates from /home/myuser/.minikube/certs:
./etcdctl --ca-file /home/myuser/.minikube/certs/ca.pem \
  --key-file /home/myuser/.minikube/certs/key.pem \
  --cert-file /home/myuser/.minikube/certs/cert.pem \
  --endpoints=https://10.240.0.23:2379 get /
I received an error:
Error: client: etcd cluster is unavailable or misconfigured; error
#0: x509: certificate signed by unknown authority
error #0: x509: certificate signed by unknown authority
Did I use the correct certificates?
I tried different certificates, like this:
./etcdctl --ca-file /var/lib/minikube/certs/ca.crt \
  --key-file /var/lib/minikube/certs/apiserver-etcd-client.key \
  --cert-file /var/lib/minikube/certs/apiserver-etcd-client.crt \
  --endpoints=https://10.240.0.23:2379 get /
I received the same error as before.
Any idea what the problem is?
For minikube, the correct path for the etcd certificates is /var/lib/minikube/certs/etcd/, so the command looks like this:
# kubectl -n kube-system exec -it etcd-minikube -- sh -c "ETCDCTL_API=3 ETCDCTL_CACERT=/var/lib/minikube/certs/etcd/ca.crt ETCDCTL_CERT=/var/lib/minikube/certs/etcd/server.crt ETCDCTL_KEY=/var/lib/minikube/certs/etcd/server.key etcdctl endpoint health"
I needed to use ETCDCTL_API=3 before the commands.
I saw it being used in Kubernetes the Hard Way, from this GitHub repo.
The certificates are located in /etc/kubernetes/pki/etcd.
The command should work like this:
ETCDCTL_API=3 ./etcdctl --endpoints=https://172.17.0.64:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key get / --prefix
I tested it and it worked for me.
If you want to dump all etcd entries, fully prefixed, from the host (outside the etcd container), you can also run (here for minikube/local testing):
kubectl exec -it \
-n kube-system etcd-minikube \
-- sh -c 'ETCDCTL_CACERT=/var/lib/minikube/certs/etcd/ca.crt \
ETCDCTL_CERT=/var/lib/minikube/certs/etcd/server.crt \
ETCDCTL_KEY=/var/lib/minikube/certs/etcd/server.key \
ETCDCTL_API=3 \
etcdctl \
get \
--prefix=true /'
Try executing the command below to list the actual CA, certificate, and key paths:
$ cat /etc/etcd.env
# TLS settings
ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ssl/ca.pem
ETCD_CERT_FILE=/etc/ssl/etcd/ssl/member-k8s-m1.pem
ETCD_KEY_FILE=/etc/ssl/etcd/ssl/member-k8s-m1-key.pem
ETCD_CLIENT_CERT_AUTH=true
Then you will be able to use the correct certificates.
Run the command again:
./etcdctl --endpoints https://x.x.x.x:2379 \
  --ca-file=/etc/ssl/etcd/ssl/ca.pem \
  --cert-file=/etc/ssl/etcd/ssl/member-k8s-m1.pem \
  --key-file=/etc/ssl/etcd/ssl/member-k8s-m1-key.pem
You can find more information here: etcd-certificates.
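As a hedged convenience sketch (assuming a bash shell and that /etc/etcd.env uses the variable names shown above), you can source the env file and reuse its variables instead of retyping the paths:
# Reuse the paths from /etc/etcd.env; note that with ETCDCTL_API=3 the
# flags are --cacert/--cert/--key rather than --ca-file/--cert-file/--key-file
source /etc/etcd.env
ETCDCTL_API=3 ./etcdctl --endpoints https://x.x.x.x:2379 \
  --cacert="$ETCD_TRUSTED_CA_FILE" \
  --cert="$ETCD_CERT_FILE" \
  --key="$ETCD_KEY_FILE" \
  endpoint health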
I've got some (possibly) strange behavior when trying to get secrets from Vault.
Setup:
Vault 1.2.2
Very basic KV secret
A token with an associated policy that allows reading this secret.
I can successfully read that secret using the Vault CLI:
root@us-border-proxy# env | grep VAULT
VAULT_TOKEN=BLABLA
VAULT_CACERT=./vault-ca.crt
VAULT_ADDR=https://1.1.1.1:8200
root@us-border-proxy# vault kv get secret/example
=== Data ===
Key Value
--- -----
key SECRETPASSWORD
But the problem starts when I try to do the same using the Vault API; I just get a 403:
root@us-border-proxy# curl -k -H "X-Vault-Token: BLABLA" -X GET https://1.1.1.1:8200/v1/secret/data/example
{"errors":["1 error occurred:\n\t* permission denied\n\n"]}
What am I missing?
I found your error.
When you list from the CLI, the path you use is secret/example:
root@us-border-proxy# vault kv get secret/example
=== Data ===
Key Value
--- -----
key SECRETPASSWORD
But the path in the curl command is secret/data/example:
curl -k -H "X-Vault-Token: BLABLA" -X GET https://1.1.1.1:8200/v1/secret/data/example
So changing the curl path to secret/example should work. (This applies to a KV version 1 mount; a KV version 2 mount does expect the /data/ segment in the API path.)
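A short sketch of both forms, plus a way to check which KV version the secret mount uses (the address and token are the ones from the question above):
# KV v1 mount: the API path has no /data/ segment
curl -k -H "X-Vault-Token: BLABLA" \
  https://1.1.1.1:8200/v1/secret/example
# KV v2 mount: the API path includes /data/
curl -k -H "X-Vault-Token: BLABLA" \
  https://1.1.1.1:8200/v1/secret/data/example
# Check the mount's KV version (the Options column shows version=1 or 2)
vault secrets list -detailed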
I have created a Google Dataproc cluster, but I now have a requirement to install Presto. Presto is provided as an initialization action on Dataproc here; how can I run this initialization action after the cluster has been created?
Most init actions would probably run even after the cluster is created (though I haven't tried the Presto init action).
I like to run gcloud dataproc clusters describe to get the instance names, then run something like gcloud compute ssh <NODE> -- -T sudo bash -s < presto.sh for each node (see the sketch after the notes below). Reference: How to use SSH to run a shell script on a remote machine?.
Notes:
Everything after the -- is passed as arguments to the normal ssh command.
The -T means don't try to create an interactive session (otherwise you'll get a warning like "Pseudo-terminal will not be allocated because stdin is not a terminal.").
I use "sudo bash" because init action scripts assume they're being run as root.
presto.sh must be a copy of the script on your local machine. You could alternatively ssh in and run gsutil cp gs://dataproc-initialization-actions/presto/presto.sh . && sudo bash presto.sh.
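Putting that together, a minimal sketch (untested; the node names are placeholders you would take from clusters describe):
# Run the Presto init action on each node of an existing cluster.
# Replace the node names with the ones from `gcloud dataproc clusters describe`.
for NODE in my-cluster-m my-cluster-w-0 my-cluster-w-1; do
  # -T: no pseudo-terminal; presto.sh is streamed to stdin and run as root
  gcloud compute ssh "$NODE" -- -T sudo bash -s < presto.sh
done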
But @Kanji Hara is correct in general. Spinning up a new cluster is pretty fast/painless, so we advocate using initialization actions when creating a cluster.
You could use the --initialization-actions parameter. For example:
gcloud dataproc clusters create $CLUSTERNAME \
--project $PROJECT \
--num-workers $WORKERS \
--bucket $BUCKET \
--master-machine-type $VMMASTER \
--worker-machine-type $VMWORKER \
--initialization-actions \
gs://dataproc-initialization-actions/presto/presto.sh \
--scopes cloud-platform
Maybe this script can help you: https://github.com/kanjih-ciandt/script-dataproc-datalab