How to connect vault CLI to Vault Cluster installed on k8s? - hashicorp-vault

I have installed a Vault cluster in k8s (AKS); now I am trying to connect to that cluster with the Vault CLI.
The problem is I can't find any info or documentation.
I downloaded vault.exe,
but where do I configure it to connect to the cluster?

You need to export some environment variables to use the Vault CLI:
# Your Vault server address
$ export VAULT_ADDR=https://127.0.0.1:8200
# Vault token
$ export VAULT_TOKEN="****"
# If your server is secured with TLS
$ export VAULT_CACERT=ca.crt
$ export VAULT_CLIENT_CERT=tls.crt
$ export VAULT_CLIENT_KEY=tls.key
Now you are ready to use the Vault CLI.
$ vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
...             ...
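Note: when the Vault server runs inside the cluster, pointing VAULT_ADDR at 127.0.0.1 only works if you first forward the service port to your machine. A minimal sketch, assuming the Helm chart's default service name vault in the vault namespace:
$ kubectl port-forward -n vault svc/vault 8200:8200
# in another terminal
$ export VAULT_ADDR=https://127.0.0.1:8200   # use http:// if TLS is disabled on the listener
$ vault status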

There are better ways to connect to Vault.
Assuming you deployed Vault in the vault namespace, you can start a shell.
Using kubectl:
kubectl exec -n vault -it vault-0 -- /bin/sh
After that you can log in using the root token:
vault login $VAULT_ROOT_KEY
Note: you get the root token when you run init (assuming you followed the official guide to init and unseal Vault):
kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > keys.json
VAULT_UNSEAL_KEY=$(cat keys.json | jq -r ".unseal_keys_b64[]")
echo $VAULT_UNSEAL_KEY
VAULT_ROOT_KEY=$(cat keys.json | jq -r ".root_token")
echo $VAULT_ROOT_KEY
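For completeness, a minimal sketch of the unseal step that usually follows init (assuming the single key share generated above):
$ kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY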
Note 2: The default root token account has a specific root role and is not allowed to manipulate secrets unless you allow it explicitly.
Using the UI:
Depending on how you deploy Vault, there are multiple ways to enable it.
Using config.hcl file value:
ui = true
You can provide the config file directly via the CLI:
vault server -config vault-server.hcl
or, in the K8s case, by providing it as a ConfigMap consumed by a StatefulSet:
apiVersion: v1
kind: ConfigMap
metadata:
  name: vault-config
  namespace: default
data:
  extraconfig-from-values.hcl: |-
    disable_mlock = true
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "file" {
      path = "/vault/data"
    }
and apply the configmap:
kubectl apply -f configmap.yaml
In the StatefulSet manifest you reference this ConfigMap, mount it as a volume, and copy the file to the proper place on startup. Part of the manifest:
volumes:
  - name: config
    configMap:
      name: vault-config
  - name: home
    emptyDir: {}
containers:
  - name: vault
    image: hashicorp/vault:1.8.0
    imagePullPolicy: IfNotPresent
    command:
      - "/bin/sh"
      - "-ec"
    args:
      - |
        cp /vault/config/extraconfig-from-values.hcl /tmp/storageconfig.hcl;
        [ -n "${HOST_IP}" ] && sed -Ei "s|HOST_IP|${HOST_IP?}|g" /tmp/storageconfig.hcl;
        [ -n "${POD_IP}" ] && sed -Ei "s|POD_IP|${POD_IP?}|g" /tmp/storageconfig.hcl;
        [ -n "${HOSTNAME}" ] && sed -Ei "s|HOSTNAME|${HOSTNAME?}|g" /tmp/storageconfig.hcl;
        [ -n "${API_ADDR}" ] && sed -Ei "s|API_ADDR|${API_ADDR?}|g" /tmp/storageconfig.hcl;
        [ -n "${TRANSIT_ADDR}" ] && sed -Ei "s|TRANSIT_ADDR|${TRANSIT_ADDR?}|g" /tmp/storageconfig.hcl;
        [ -n "${RAFT_ADDR}" ] && sed -Ei "s|RAFT_ADDR|${RAFT_ADDR?}|g" /tmp/storageconfig.hcl;
        /usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
Finally, if you use the Helm chart for deployment, you just need to set values to enable the UI:
ui:
  # True if you want to create a Service entry for the Vault UI.
  #
  # serviceType can be used to control the type of service created. For
  # example, setting this to "LoadBalancer" will create an external load
  # balancer (for supported K8S installations) to access the UI.
  enabled: true
  publishNotReadyAddresses: true
  # The service should only contain selectors for active Vault pod
  activeVaultPodOnly: false
  serviceType: "NodePort"
  serviceNodePort: 31000
  externalPort: 8200
  targetPort: 8200
With this you expose your UI service on the IP of your node with NodePort 31000 (e.g. http://192.168.10.100:31000). If you have a load-balancer controller installed, you can use serviceType: "LoadBalancer" instead.
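If you install the chart from the CLI, the same values can be set inline; a sketch, assuming the official hashicorp/vault chart installed into the vault namespace:
$ helm repo add hashicorp https://helm.releases.hashicorp.com
$ helm upgrade --install vault hashicorp/vault -n vault \
    --set ui.enabled=true \
    --set ui.serviceType=NodePort \
    --set ui.serviceNodePort=31000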

Related

How to configure kubectl to act as a service account?

I wish to run a Drone CI/CD pipeline on a Raspberry Pi, including a stage to update a Kubernetes Deployment. Unfortunately, all the pre-built solutions that I've found for doing so (e.g. 1, e.g. ) are not built for arm64 architecture, so I believe I need to build my own.
I am attempting to adapt the commands from here (see also README.md, which describes the authorization required), but my attempt to contact the cluster still fails with authorization problems:
$ cat service-account-definition.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: drone-demo-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone-demo-service-account-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: drone-demo-service-account
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
$ kubectl apply -f service-account-definition.yaml
serviceaccount/drone-demo-service-account created
clusterrolebinding.rbac.authorization.k8s.io/drone-demo-service-account-clusterrolebinding created
$ kubectl get serviceaccount drone-demo-service-account
NAME                         SECRETS   AGE
drone-demo-service-account   1         10s
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.ca\.crt}' > secrets/cert
$ head -c 10 secrets/cert
LS0tLS1CRU%
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 > secrets/token
$ head -c 10 secrets/token
WlhsS2FHSk%
$ cat Dockerfile
FROM busybox
COPY . .
CMD ["./script.sh"]
$ cat script.sh
#!/bin/sh
server=$(cat secrets/server) # Pre-filled
cert=$(cat secrets/cert)
# Added this `tr` call, which is not present in the source I'm working from, after noticing that
# the file-content contains newlines
token=$(cat secrets/token | tr -d '\n')
echo "DEBUG: server is $server, cert is $(echo $cert | head -c 10)..., token is $(echo $token | head -c 10)..."
# Cannot depend on the bitnami/kubectl image (https://hub.docker.com/r/bitnami/kubectl), because
# it's not available for arm64 - https://github.com/bitnami/charts/issues/7305
wget https://storage.googleapis.com/kubernetes-release/release/v1.19.2/bin/linux/arm64/kubectl
chmod +x kubectl
./kubectl config set-credentials default --token=$token
echo $cert | base64 -d > ca.crt
./kubectl config set-cluster default --server=$server --certificate-authority=ca.crt
./kubectl config set-context default --cluster=default --user=default
./kubectl config use-context default
echo "Done with setup, now cat-ing .kube/config"
echo
cat $HOME/.kube/config
echo "Attempting to get pods"
echo
./kubectl get pods
$ docker build -t stack-overflow-testing . && docker run stack-overflow-testing
Sending build context to Docker daemon 10.75kB
Step 1/3 : FROM busybox
---> 3c277069c6ae
Step 2/3 : COPY . .
---> 74c6a132d255
Step 3/3 : CMD ["./script.sh"]
---> Running in dc55f33f74bb
Removing intermediate container dc55f33f74bb
---> dc68a5d6ba9b
Successfully built dc68a5d6ba9b
Successfully tagged stack-overflow-testing:latest
DEBUG: server is https://rassigma.avril:6443, cert is LS0tLS1CRU..., token is WlhsS2FHSk...
Connecting to storage.googleapis.com (142.250.188.16:443)
wget: note: TLS certificate validation not implemented
saving to 'kubectl'
kubectl 18% |***** | 7118k 0:00:04 ETA
kubectl 43% |************* | 16.5M 0:00:02 ETA
kubectl 68% |********************** | 26.2M 0:00:01 ETA
kubectl 94% |****************************** | 35.8M 0:00:00 ETA
kubectl 100% |********************************| 38.0M 0:00:00 ETA
'kubectl' saved
User "default" set.
Cluster "default" set.
Context "default" created.
Switched to context "default".
Done with setup, now cat-ing .kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /ca.crt
    server: https://rassigma.avril:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    token: WlhsS2FHSkhZM[...REDACTED]
Attempting to get pods
error: You must be logged in to the server (Unauthorized)
If I copy the ~/.kube/config from my laptop to the docker container, kubectl commands succeed as expected - so, this isn't a networking issue, just an authorization one. I do note that my laptop-based ~/.kube/config lists client-certificate-data and client-key-data rather than token under users: user:, but I suspect that's because my base config is recording a non-service-account.
How can I set up kubectl to authorize as a service account?
Some reading I have done that didn't answer the question for me:
Kubernetes documentation on AuthN/AuthZ
Google Kubernetes Engine article on service accounts
Configure Service Accounts for Pods (this described how to create and associate the accounts, but not how to act as them)
Two blog posts (1, 2) that refer to Service Accounts
It appears you have used | base64 instead of | base64 --decode when writing secrets/token, so the file still contains a base64-encoded value (encoded twice, in fact) and kubectl sends that instead of the actual token.
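A sketch of the corrected extraction step, mirroring the command from the question:
$ kubectl get secret $(kubectl get secrets | grep 'drone-demo-service-account-token' | cut -f1 -d' ') -o jsonpath='{.data.token}' | base64 --decode > secrets/token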

Bitnami Redis on Kubernetes Authentication Failure with Existing Secret

I'm trying to install Redis in a Kubernetes environment with the Bitnami Redis Helm chart. I want to use a defined password rather than a randomly generated one, but I'm getting the error below when I try to connect to the Redis master or replicas with redis-cli.
I have no name!#redis-client:/$ redis-cli -h redis-master -a $REDIS_PASSWORD
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Warning: AUTH failed
I created a Kubernetes secret like this.
---
apiVersion: v1
kind: Secret
metadata:
  name: redis-secret
  namespace: redis
type: Opaque
data:
  redis-password: YWRtaW4xMjM0Cg==
And in the values.yaml file I updated the auth spec as below.
auth:
  enabled: true
  sentinel: false
  existingSecret: "redis-secret"
  existingSecretPasswordKey: "redis-password"
  usePasswordFiles: false
If I don't define the existingSecret field and use the randomly generated password, I can connect without an issue. I also tried AUTH admin1234 after the Warning: AUTH failed error, but it didn't work either.
You can achieve it in a much simpler way, i.e. by running:
$ helm install my-release \
    --set auth.password="admin1234" \
    bitnami/redis
This will update your "my-release-redis" secret, so when you run:
$ kubectl get secrets my-release-redis -o yaml
you'll see it contains your password, already base64-encoded:
apiVersion: v1
data:
  redis-password: YWRtaW4xMjM0Cg==
kind: Secret
...
In order to get your password, you need to run:
export REDIS_PASSWORD=$(kubectl get secret --namespace default my-release-redis -o jsonpath="{.data.redis-password}" | base64 --decode)
This will set and export the REDIS_PASSWORD environment variable containing your Redis password.
And then you may run your redis-client pod:
kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.4-debian-10-r13 --command -- sleep infinity
which will set the REDIS_PASSWORD environment variable within your redis-client pod by assigning it the value of REDIS_PASSWORD set locally in the previous step.
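You can then attach to that pod and authenticate with redis-cli; a sketch, assuming the chart's default master service name my-release-redis-master:
$ kubectl exec -it --namespace default redis-client -- /bin/bash
# inside the pod:
$ redis-cli -h my-release-redis-master -a $REDIS_PASSWORD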
The issue was how I encoded the password with the echo command: there was a newline character at the end of my password. Encoding with printf instead of echo produces a different result.
printf admin1234 | base64
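For comparison (the trailing newline added by echo is what ends up inside YWRtaW4xMjM0Cg==):
$ echo admin1234 | base64
YWRtaW4xMjM0Cg==
$ printf admin1234 | base64
YWRtaW4xMjM0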

Using a custom certificate for the Kubernetes api server with minikube

I have been trying to find out how to do this but so far have found nothing; I am quite new to Kubernetes, so I might just have overlooked it. I want to use my own certificate for the Kubernetes API server. Is this possible? And if so, can someone perhaps give me a link?
OK, so here is my idea. We know we cannot change the cluster certificates, but there is another way: we should be able to proxy through an Ingress.
First we enable the ingress addon:
➜ ~ minikube addons enable ingress
Given tls.crt and tls.key we create a secret (you don't need to do this if you are using cert-manager, but that requires some additional steps I am not going to describe here):
➜ ~ kubectl create secret tls my-tls --cert=tls.crt --key tls.key
and an ingress object:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-k8s
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: my-tls
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes
            port:
              number: 443
Notice what the docs say about CN and FQDN (k8s docs):
Referencing this secret in an Ingress tells the Ingress controller to secure the channel from the client to the load balancer using TLS. You need to make sure the TLS secret you created came from a certificate that contains a Common Name (CN), also known as a Fully Qualified Domain Name (FQDN) for https-example.foo.com.
The only issue with this approach is that we cannot use certificates for authentication when accessing from the outside.
But we can use tokens. Here is a page in k8s docs: https://kubernetes.io/docs/reference/access-authn-authz/authentication/ that lists all possible methods of authentication.
For testing I chose a service account token, but feel free to experiment with others.
Let's create a service account, bind a role to it, and try to access the cluster:
➜ ~ kubectl create sa cadmin
serviceaccount/cadmin created
➜ ~ kubectl create clusterrolebinding --clusterrole cluster-admin --serviceaccount default:cadmin cadminbinding
clusterrolebinding.rbac.authorization.k8s.io/cadminbinding created
Now we follow the instructions from access-cluster-api in the docs to try to access the cluster with the SA token.
➜ ~ APISERVER=https://$(minikube ip)
➜ ~ TOKEN=$(kubectl get secret $(kubectl get serviceaccount cadmin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 --decode )
➜ ~ curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure -H "Host: foo.bar.com"
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.39.210:8443"
    }
  ]
}
note: I am testing this with invalid/self-signed certificates and I don't own the foo.bar.com domain, so I need to pass the Host header by hand. For you it may look a bit different, so don't just copy-paste; try to understand what's happening and adjust it. If you have a domain, you should be able to access it directly (no $(minikube ip) necessary).
As you should see, it worked! We got a valid response from the API server.
But we probably don't want to use curl to access k8s.
Let's create a kubeconfig with the token.
kubectl config set-credentials cadmin --token $TOKEN --kubeconfig my-config
kubectl config set-cluster mini --kubeconfig my-config --server https://foo.bar.com
kubectl config set-context mini --kubeconfig my-config --cluster mini --user cadmin
kubectl config use-context --kubeconfig my-config mini
And now we can access k8s with this config:
➜ ~ kubectl get po --kubeconfig my-config
No resources found in default namespace.
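Note: for the kubeconfig above to resolve foo.bar.com on a test machine without real DNS, one option is a hosts-file entry pointing at the minikube IP (a sketch, assuming you keep the example hostname):
$ echo "$(minikube ip) foo.bar.com" | sudo tee -a /etc/hosts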
Yes, you can use your own certificates and set them in the Kubernetes API server.
Suppose you have created the certificates; move and save them to a specific directory on the node:
{
  sudo mkdir -p /var/lib/kubernetes/
  sudo mv ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
    service-account-key.pem service-account.pem \
    encryption-config.yaml /var/lib/kubernetes/
}
The instance internal IP address will be used to advertise the API Server to members of the cluster. Get the internal IP:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
Then you can create the API server service and point it at those files.
Note: the above example specifically assumes GCP instances, so you might have to change some commands, e.g.:
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/0/ip)
For the above command, you can provide the bare-metal IP manually instead of getting it from the GCP instance metadata API if you are not using GCP.
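For context, the certificate-related flags in the kube-apiserver unit from that guide look roughly like this (a sketch; see the links below for the complete unit file):
/usr/local/bin/kube-apiserver \
  --advertise-address=${INTERNAL_IP} \
  --client-ca-file=/var/lib/kubernetes/ca.pem \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --service-account-key-file=/var/lib/kubernetes/service-account.pem \
  --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
  ...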
Please refer to this link: https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/08-bootstrapping-kubernetes-controllers.md#configure-the-kubernetes-api-server
There you can find all the details for creating and setting up a whole Kubernetes cluster from scratch, with detailed documentation and commands: https://github.com/kelseyhightower/kubernetes-the-hard-way

How to create custom themes on Keycloak Operator deployment on Kubernetes?

The complete flow is roughly this:
Step-1: Applying all the relevant YAMLs
$ sudo kind create cluster --name aftab-cluster --config cluster-config.yaml
$ curl -sL https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.17.0/install.sh | bash -s v0.17.0
$ kubectl apply -f keycloak_backup.yaml
$ kubectl apply -f keycloaks_client.yaml
$ kubectl apply -f keycloaks_realm.yaml   # Theme config was not there, so I added loginTheme:
loginTheme:
  description: Login Theme
  type: string
loginWithEmailAllowed:
  description: Login with email
  type: boolean
$ kubectl apply -f keycloak_users.yaml
$ kubectl apply -f keycloaks_crd.yaml
$ kubectl apply -f namespace.yaml
$ kubectl apply -f role.yaml -n keycloak-namespace
$ kubectl apply -f role_binding.yaml -n keycloak-namespace
$ kubectl apply -f sa.yaml -n keycloak-namespace
$ kubectl apply -f operator.yaml -n keycloak-namespace
$ kubectl apply -f keycloak.yaml -n keycloak-namespace
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  labels:
    app: sso
spec:
  instances: 1
  extensions:
    - /PATH/FOR/MY/COLOR-THEME/JAR/
  externalAccess:
    enabled: True
Step-2: Verifying that the pods are running. RUNNING HAPPILY.
$ kubectl get po -n keycloak-namespace   # I can see the pods are running successfully.
NAME                                   READY   STATUS    RESTARTS   AGE
keycloak-0                             1/1     Running   0          3m13s
keycloak-operator-798747fb9d-2lgzn     1/1     Running   0          4m21s
keycloak-postgresql-85579c4d6d-4tgxj   1/1     Running   0          3m13s
Step-3: Creating a new Realm and client
$ kubectl apply -f my-realm.yaml -n keycloak-namespace
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: myrealm-realm
  labels:
    app: myrealm-realm
spec:
  realm:
    id: "myrealm"
    realm: "myrealm"
    enabled: True
    displayName: "myrealm"
    userRegistration: True
    registrationAllowed: True
    editUsernameAllowed: True
    resetPasswordAllowed: True
    rememberMe: True
    registrationEmailAsUsername: True
    loginTheme: "COLOR-THEME"    # <<<<<<<<<< MY CUSTOM THEME
    users:
      - username: "admin"
        firstName: "Admin"
        realmRoles:
          - "offline_access"
          - "uma_authorization"
$ kubectl apply -f my-client.yaml -n keycloak-namespace
Step-4: Finally, accessed the Keycloak instance at http://localhost:3010. Working as expected.
Realms, clients, users, etc. are looking good. But my COLOR-THEME is not found in the realm settings tab; only the default themes (keycloak and base) are there.
The directory structure looks like this:
$ ls
cluster-config.yaml keycloak_backup.yaml keycloaks_crd.yaml namespace.yaml role_binding.yaml my-client.yaml
xyz keycloak_users.yaml keycloaks_realm.yaml operator.yaml sa.yaml my_realm.yaml
keycloak.yaml keycloaks_client.yaml keyclok-ing.yaml role.yaml themes myrealm-realm.yaml
How do we use CRDs in order to use or create new Keycloak themes?
For the first part of the question: if you want to add/change a field (i.e., the realm theme) that the Keycloak Operator recognizes natively, the only change you need is to add the following to each of your Realm CRs:
spec:
  realm:
    id: Realm_ID
    ...
    loginTheme: "my_login_theme"
For the second part (i.e., creating new Keycloak themes):
You can't do that through the CRDs alone. First you create the new theme and add the theme's folders into the Keycloak deployment, then you reference it in the realm CR as previously mentioned.
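As a quick way to get the theme files into a running pod for testing (it does not persist across pod restarts), kubectl cp can be used; a sketch, assuming the legacy WildFly-based image layout where themes live under /opt/jboss/keycloak/themes:
$ kubectl cp ./themes/COLOR-THEME keycloak-namespace/keycloak-0:/opt/jboss/keycloak/themes/COLOR-THEME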
To check whether the Keycloak Operator supports the loginTheme field, search the file keycloak-operator/deploy/crds/keycloak.org_keycloakrealms.yaml. If it is not there, you will need to add:
loginTheme:
  description: Login Theme
  type: string
loginWithEmailAllowed:
  description: Login with email
  type: boolean
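A quick check for whether the field is already declared in that CRD file might look like this:
$ grep -n "loginTheme" keycloak-operator/deploy/crds/keycloak.org_keycloakrealms.yaml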
Moreover, in the file pkg/apis/keycloak/v1alpha1/keycloakrealm_types.go you need to add that extra field to the KeycloakAPIRealm struct, namely:
type KeycloakAPIRealm struct {
    // +kubebuilder:validation:Required
    // +optional
    ID string `json:"id"`
    // Realm name.
    // +kubebuilder:validation:Required
    Realm string `json:"realm"`
    // Realm enabled flag.
    // +optional
    Enabled bool `json:"enabled"`
    // Login Theme name
    // +optional
    LoginTheme string `json:"loginTheme,omitempty"`
    .....
}
Then build the project and run it.

Recommended way to persistently change kube-env variables

We are using elasticsearch/kibana instead of gcp for logging (based on what is described here).
To have fluentd-elasticsearch pods launched, we've set LOGGING_DESTINATION=elasticsearch and ENABLE_NODE_LOGGING="true" in the "Compute Instance Template" -> "Custom metadata" -> "kube-env".
While this works fine when done manually, it gets overwritten with every gcloud container clusters upgrade, as a new Instance Template with defaults (LOGGING_DESTINATION=gcp ...) is created.
My question is: How do I persist this kind of configuration for GKE/GCE?
I thought about adding a k8s-user-startup-script but that's also defined in the Instance Template and therefore is overwritten by gcloud container clusters upgrade.
I've also tried to add a k8s-user-startup-script to the project metadata but that is not taken into account.
//EDIT
The current workaround (without recreating the Instance Template and instances) for manually switching back to elasticsearch is:
for node in $(kubectl get nodes -o name | cut -f2 -d/); do
  gcloud compute ssh $node \
    --command="sudo cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/; sudo rm /etc/kubernetes/manifests/fluentd-gcp.yaml";
done
kubelet will pick that up, kill fluentd-gcp and start fluentd-es.
//EDIT #2
Now running a "startup-script" DaemonSet for this:
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: startup-script
  namespace: kube-system
  labels:
    app: startup-script
spec:
  template:
    metadata:
      labels:
        app: startup-script
    spec:
      hostPID: true
      containers:
      - name: startup-script
        image: gcr.io/google-containers/startup-script:v1
        securityContext:
          privileged: true
        env:
        - name: STARTUP_SCRIPT
          value: |
            #! /bin/bash
            set -o errexit
            set -o pipefail
            set -o nounset
            # Replace Google-Cloud-Logging with EFK
            if [[ ! -f /etc/kubernetes/manifests/fluentd-es.yaml ]]; then
              if [[ -f /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml ]]; then
                # GCI images
                cp -a /home/kubernetes/kube-manifests/kubernetes/fluentd-es.yaml /etc/kubernetes/manifests/
              elif [[ -f /srv/salt/fluentd-es/fluentd-es.yaml ]]; then
                # Debian based GKE images
                cp -a /srv/salt/fluentd-es/fluentd-es.yaml /etc/kubernetes/manifests/
              fi
              test -f /etc/kubernetes/manifests/fluentd-es.yaml && rm /etc/kubernetes/manifests/fluentd-gcp.yaml
            fi
There isn't a fully supported way to reconfigure the kube-env in GKE. As you've found, you can hack the instance template, but this isn't guaranteed to work across upgrades.
An alternative is to create your cluster without gcp logging enabled and then create a DaemonSet that places a fluentd-elasticsearch pod on each of your nodes. Using this technique you don't need to write a (brittle) startup script or rely on the fact that the built-in startup script happens to work when setting LOGGING_DESTINATION=elasticsearch (which may break across upgrades even if it wasn't getting overwritten).
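A minimal sketch of such a DaemonSet applied from the shell (the image name/tag and mount paths are assumptions based on the old fluentd-elasticsearch addon; adjust them to whatever matches your cluster version):
kubectl apply -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: fluentd-elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google_containers/fluentd-elasticsearch:1.20   # assumed tag; pick the addon image matching your cluster
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
EOF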