Unable to fetch Vault Token for Pod Service Account - kubernetes

I am using the Vault CSI Driver on Charmed Kubernetes v1.19, trying to retrieve secrets from Vault for a pod running in a separate namespace (webapp) with its own service account (webapp-sa), following the steps in the blog.
As far as I understand so far, the Pod is trying to authenticate to the Kubernetes API so that it can later obtain a Vault token and read the secret from Vault.
$ kubectl get po webapp
NAME     READY   STATUS              RESTARTS   AGE
webapp   0/1     ContainerCreating   0          22m
It appears to me there's some issue authenticating with the Kubernetes API.
The pod remains stuck in the ContainerCreating state with the message: failed to create a service account token for requesting pod
Events:
Type     Reason       Age                    From               Message
----     ------       ----                   ----               -------
Normal   Scheduled    35m                    default-scheduler  Successfully assigned webapp/webapp to host-03
Warning  FailedMount  4m38s (x23 over 35m)   kubelet            MountVolume.SetUp failed for volume "secrets-store-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod webapp/webapp, err: rpc error: code = Unknown desc = error making mount request: failed to create a service account token for requesting pod {webapp xxxx webapp webapp-sa}: the server could not find the requested resource
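Since the CSI provider requests the pod's token via the TokenRequest API, one quick check (a diagnostic sketch, assuming cluster-admin access; tokenrequest.json is a scratch file used only for illustration) is whether the API server serves the token subresource at all:
$ kubectl get --raw /api/v1 | grep -o '"serviceaccounts/token"'
"serviceaccounts/token"
# request a token for webapp-sa by hand; a failure here mirrors the mount error
$ cat <<EOF > tokenrequest.json
{"apiVersion": "authentication.k8s.io/v1", "kind": "TokenRequest", "spec": {"audiences": []}}
EOF
$ kubectl create --raw /api/v1/namespaces/webapp/serviceaccounts/webapp-sa/token -f tokenrequest.json
If the second command also returns "the server could not find the requested resource", the TokenRequest API is not enabled on the apiserver.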
I can get a Vault token using the CLI in the pod's namespace:
$ vault write auth/kubernetes/login role=database jwt=$SA_JWT_TOKEN
Key     Value
---     -----
token   <snipped>
I do get the vault token using the API as well:
$ curl --request POST --data @payload.json https://127.0.0.1:8200/v1/auth/kubernetes/login
{
  "request_id": "1234",
  <snipped>
  "auth": {
    "client_token": "XyZ",
    "accessor": "abc",
    "policies": [
      "default",
      "webapp-policy"
    ],
    "token_policies": [
      "default",
      "webapp-policy"
    ],
    "metadata": {
      "role": "database",
      "service_account_name": "webapp-sa",
      "service_account_namespace": "webapp",
      "service_account_secret_name": "webapp-sa-token-abcd",
      "service_account_uid": "123456"
    },
    <snipped>
  }
}
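For reference, payload.json follows the request format from the Vault docs linked below: the role name plus the service account JWT (values elided here):
$ cat payload.json
{"role": "database", "jwt": "<SA_JWT_TOKEN>"}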
Reference: https://www.vaultproject.io/docs/auth/kubernetes
As per the Vault documentation, I've configured Vault with the Token Reviewer SA as follows:
$ cat vault-auth-service-account.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: role-token-review-binding
  namespace: vault
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vault-auth
  namespace: vault
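One way to confirm the binding actually grants the token reviewer permission (a sanity check, assuming the vault-auth service account exists in the vault namespace):
$ kubectl auth can-i create tokenreviews --as=system:serviceaccount:vault:vault-auth
yes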
Vault is configured with JWT from the Token Reviewer SA as follows:
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="<Token Reviewer service account JWT>" \
    kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
    kubernetes_ca_cert=@ca.crt
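To double-check what Vault stored, the config can be read back; kubernetes_host and the CA cert should match the cluster the pod runs in:
$ vault read auth/kubernetes/config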
I have defined a Vault Role to allow the webapp-sa access to the secret:
$ vault write auth/kubernetes/role/database \
    bound_service_account_names=webapp-sa \
    bound_service_account_namespaces=webapp \
    policies=webapp-policy \
    ttl=72h
Success! Data written to: auth/kubernetes/role/database
The webapp-sa is allowed access to the secret as per the Vault Policy defined as follows:
$ vault policy write webapp-policy - <<EOF
> path "secret/data/db-pass" {
> capabilities = ["read"]
> }
> EOF
Success! Uploaded policy: webapp-policy
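As a sanity check that the role and policy line up end to end, the client_token returned by the login above should be able to read the secret directly (a sketch; substitute the real token):
$ VAULT_TOKEN=<client_token from the login above> vault kv get secret/db-pass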
The Pod and its SA are defined as follows:
$ cat webapp-sa-and-pod.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: webapp-sa
---
kind: Pod
apiVersion: v1
metadata:
  name: webapp
spec:
  serviceAccountName: webapp-sa
  containers:
  - image: registry/jweissig/app:0.0.1
    name: webapp
    volumeMounts:
    - name: secrets-store-inline
      mountPath: "/mnt/secrets-store"
      readOnly: true
  volumes:
  - name: secrets-store-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        providerName: vault
        secretProviderClass: "vault-database"
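The SecretProviderClass itself isn't shown above; for completeness, a sketch of what the referenced vault-database class typically looks like for the Vault provider (the vaultAddress value is an assumption, adjust to your Vault service):
apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
  name: vault-database
spec:
  provider: vault
  parameters:
    vaultAddress: "http://vault.vault:8200"   # assumption: Vault reachable in-cluster
    roleName: "database"
    objects: |
      - objectName: "db-password"
        secretPath: "secret/data/db-pass"
        secretKey: "password"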
Does anyone have any clue as to why the Pod won't authenticate with the Kubernetes API?
Do I have to enable flags on the kube-apiserver for the Token Review API to work?
Is it enabled by default on Charmed Kubernetes v1.19?
Would be grateful for any help.
Regards,
Sana

Related

How can you use a private gitlab container registry to pull an image in kubernetes?

I have a private docker registry hosted on gitlab and I would like to use this repository to pull images for my local kubernetes cluster:
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   68m
K8s is on v1.22.5 and is a single-node cluster that comes 'out of the box' with Docker Desktop. I have already built and deployed an image to the gitlab container registry registry.gitlab.com. What I have done already:
Executed the command docker login -u <username> -p <password> registry.gitlab.com
Modified the ~/.docker/config.json file to the following:
{
  "auths": {
    "registry.gitlab.com": {}
  },
  "credsStore": "osxkeychain"
}
Created and deployed a secret to the cluster with the file:
apiVersion: v1
kind: Secret
metadata:
  name: registry-key
data:
  .dockerconfigjson: <base-64-encoded-.config.json-file>
type: kubernetes.io/dockerconfigjson
Deployed an app with the following file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
  labels:
    app: test-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app
  template:
    metadata:
      labels:
        app: test-app
    spec:
      imagePullSecrets:
      - name: registry-key
      containers:
      - name: test-app
        image: registry.gitlab.com/<image-name>:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
The deployment is created successfully but upon inspection of the pod (kubectl describe pod) I find the following events:
Events:
Type     Reason     Age               From               Message
----     ------     ----              ----               -------
Normal   Scheduled  21s               default-scheduler  Successfully assigned default/test-deployment-87b5747b5-xdsl9 to docker-desktop
Normal   BackOff    19s               kubelet            Back-off pulling image "registry.gitlab.com/<image-name>:latest"
Warning  Failed     19s               kubelet            Error: ImagePullBackOff
Normal   Pulling    7s (x2 over 20s)  kubelet            Pulling image "registry.gitlab.com/<image-name>:latest"
Warning  Failed     7s (x2 over 19s)  kubelet            Failed to pull image "registry.gitlab.com/<image-name>:latest": rpc error: code = Unknown desc = Error response from daemon: Head "https://registry.gitlab.com/v2/<image-name>/manifests/latest": denied: access forbidden
Warning  Failed     7s (x2 over 19s)  kubelet            Error: ErrImagePull
Any information on what might be causing these errors would be appreciated.
I managed to solve the issue by editing the default config.json produced by $ docker login:
{
  "auths": {
    "registry.gitlab.com": {}
  },
  "credsStore": "osxkeychain"
}
becomes
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "<access-token-in-plain-text>"
    }
  }
}
Thanks Bala for suggesting this in the comments. I realise storing the access token in plain text in the file may not be secure but this can be changed to use a path if needed.
I also created the secret as per OzzieFZI's suggestion:
$ kubectl create secret docker-registry registry-key \
--docker-server=registry.gitlab.com \
--docker-username=<username> \
--docker-password="$(cat /path/to/token.txt)"
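Either way, it's worth decoding the secret back out of the cluster to confirm the auth entry actually made it in:
$ kubectl get secret registry-key -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d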
What password do you use?
Confirm if you are using a Personal Access Token with read/write access to the container registry. Your username should be the gitlab username.
I would suggest creating the docker registry secret using kubectl and a txt file with the token as the content, this way you do not have to encode the dockerconfigjson yourself. Here is an example.
$ kubectl create secret docker-registry registry-key \
--docker-server=registry.gitlab.com \
--docker-username=<username> \
--docker-password="$(cat /path/to/token.txt)"
See documentation on the command here
Here's something a bit more detailed in case anyone is having problems with this. Also, gitlab has introduced deploy tokens under the Repository -> Deploy Tokens tab, which means you do not need to use personal access tokens.
Create auth to put in secret resource
#!/bin/bash
if [ "$#" -ne 1 ]; then
  printf "Invalid number of arguments\n" >&2
  printf "./create_registry_secret.sh <GITLAB_DEPLOY_TOKEN>\n" >&2
  exit 1
fi
secret_gen_string='{"auths":{"https://registry.gitlab.com":{"username":"{{USER}}","password":"{{TOKEN}}","email":"{{EMAIL}}","auth":"{{SECRET}}"}}}'
gitlab_user=<YOUR_DEPLOY_TOKEN_USER>
gitlab_token=$1
gitlab_email=<YOUR_EMAIL_OR_WHATEVER>
gitlab_secret=$(echo -n "$gitlab_user:$gitlab_token" | base64 -w 0)

# use | as the sed delimiter: base64 output can contain /, which would break s/.../.../
echo -n $secret_gen_string \
  | sed "s|{{USER}}|$gitlab_user|" \
  | sed "s|{{TOKEN}}|$gitlab_token|" \
  | sed "s|{{EMAIL}}|$gitlab_email|" \
  | sed "s|{{SECRET}}|$gitlab_secret|" \
  | base64 -w 0
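Usage (the token value here is a placeholder); the script prints the base64 string to paste into the secret below:
$ ./create_registry_secret.sh my-deploy-token
eyJhdXRocyI6... (base64-encoded dockerconfigjson)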
Use the output of the script in secret resource
# A secret to pull container from gitlab registry
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: gitlab-pull-secret
data:
  .dockerconfigjson: <GENERATED_SECRET>
Reference the secret in container definition
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-test-deployment
  labels:
    app.kubernetes.io/name: gitlab-test
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: gitlab-test
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: gitlab-test
    spec:
      containers:
      - name: my-gitlab-container
        image: registry.gitlab.com/group/project/image:tag
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      # Include the authentication for gitlab container registry
      imagePullSecrets:
      - name: gitlab-pull-secret

RBAC (Role Based Access Control) on K3s

After watching a few videos on RBAC (role based access control) on kubernetes (of which this one was the most transparent for me), I've followed the steps, however on k3s, not k8s as all the sources imply. From what I could gather, the problem isn't with the actual role binding process, but rather with the x509 user cert, which isn't acknowledged by the API service:
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
This also isn't documented in Rancher's wiki on security for K3s (while it is documented for their k8s implementation and described for Rancher 2.x itself), so I'm not sure if it's a problem with my implementation or a k3s <-> k8s thing.
$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
With duplication of the process, my steps are as follows:
Get k3s ca certs
This was described to be under /etc/kubernetes/pki (k8s), however based on this seems to be at /var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key).
Gen user certs from ca certs
# generate user key
$ openssl genrsa -out user.key 2048
# generate signing request from ca
$ openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"
# generate user.crt from this
$ openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
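One k3s-specific thing worth checking at this step: the k3s API server validates client certificates against client-ca.crt, not server-ca.crt, so a user cert signed by the server CA is rejected with exactly this kind of Unauthorized error. A sketch of the same signing step against the client CA (paths as on a default k3s install):
$ openssl x509 -req -in user.csr \
  -CA /var/lib/rancher/k3s/server/tls/client-ca.crt \
  -CAkey /var/lib/rancher/k3s/server/tls/client-ca.key \
  -CAcreateserial -out user.crt -days 365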
... all good:
Creating kubeConfig file for user, based on the certs:
# Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
Created config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <server-ca.crt base64-encoded>
    server: https://<k3s masterIP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: user
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate-data: <user.crt base64-encoded>
    client-key-data: <user.key base64-encoded>
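To separate certificate problems from kubeconfig problems, the same credentials can be tested directly against the API server with curl (placeholder IP as in the config above); a 401 points at authentication, a 403 at RBAC:
$ curl --cacert server-ca.crt --cert user.crt --key user.key \
  https://<k3s masterIP>:6443/api/v1/namespaces/rbac/pods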
Setup role & roleBinding (within specified namespace 'rbac')
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
roleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user
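The RBAC side can be verified independently of the certificate by impersonating the user from an admin context; if this says yes, the problem is authentication, not the binding:
$ kubectl auth can-i list pods -n rbac --as=user
yes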
After all of this, I get fun times of...
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
Any suggestions please?
Apparently this stackOverflow question presented a solution to the problem, but following the github feed, it came down more or less to the same approach as followed here (unless I'm missing something)?
As we can find in the Kubernetes Certificate Signing Requests documentation:
A few steps are required in order to get a normal user to be able to authenticate and invoke an API.
I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user john as an example).
First, create PKI private key and CSR:
# openssl genrsa -out john.key 2048
NOTE: CN is the name of the user and O is the group that this user will belong to
# openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
Then create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kubectl.
# cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
>   name: john
> spec:
>   groups:
>   - system:authenticated
>   request: $(cat john.csr | base64 | tr -d '\n')
>   signerName: kubernetes.io/kube-apiserver-client
>   usages:
>   - client auth
> EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   39s   kubernetes.io/kube-apiserver-client   system:admin   Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   52s   kubernetes.io/kube-apiserver-client   system:admin   Approved,Issued
Export the issued certificate from the CertificateSigningRequest:
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
With the certificate created, we can define the Role and RoleBinding for this user to access Kubernetes cluster resources. I will use the Role and RoleBinding similar to yours.
# cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: john-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
The last step is to add this user into the kubeconfig file (see: Add to kubeconfig)
# kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
Finally, we can change the context to john and check if it works as expected.
# kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
As you can see, it works as expected (user john only has get and list permissions).
Thank you matt_j for the example / answer provided to my question. I marked that as the answer, as it was a direct answer to my question regarding RBAC via certificates. In addition, I'd also like to provide an example for RBAC via service accounts, as a variation (for those who prefer it for their specific use case).
Service account creation
//kubectl create serviceaccount name -n namespace
$ kubectl create serviceaccount udef -n rbac
This creates the service account and, automatically, a corresponding secret (udef-token-lhvm8), which you can see in the yaml output.
Get token from created secret:
// kubectl describe secret secretName -n namespace
$ kubectl describe secret udef-token-lhvm8 -n rbac
The secret contains 3 items: (1) ca.crt, (2) namespace, (3) token
# ... other secret context
Data
====
ca.crt:     x bytes
namespace:  x bytes
token:      xxxx token xxxx
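Equivalently, the token can be pulled out in one line; note that in the secret's jsonpath/yaml form it is base64-encoded, unlike the describe output:
$ kubectl get secret udef-token-lhvm8 -n rbac -o jsonpath='{.data.token}' | base64 -d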
Put token into config file
You can start by taking your 'admin' config file and writing it out to a file:
// location of **k3s** kubeconfig
$ sudo cat /etc/rancher/k3s/k3s.yaml > /home/{userHomeFolder}/userKubeConfig
Under the users section, you can replace the certificate data with the token:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx root ca cert content xxx
    server: https://<host IP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: nametype
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: nametype
  user:
    token: xxxx token xxxx
The roles and rolebinding manifests can be created as required, like previously specified (nb within the same namespace), in this case linking to the service account:
# role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
---
# rolebinding manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- kind: ServiceAccount
  name: udef
  namespace: rbac
With this being done, you will be able to test remotely:
// show pods -> will be allowed
$ kubectl get pods --kubeconfig userKubeConfig
..... valid response provided
// get namespaces (or other types of commands) -> should not be allowed
$ kubectl get namespaces --kubeconfig userKubeConfig
Error from server (Forbidden): namespaces is forbidden: User bla-bla

How can I determine whether Kubernetes is using authentication for an image repository?

I'm trying to investigate why a pod has a status of ImagePullBackOff.
If kubectl describe the pod I see an event listed :
Warning  Failed  5m42s (x4 over 7m2s)  kubelet  Failed to pull image "**********************": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
This is not expected, as I have docker authentication set for the default service account - via a secret as mentioned here: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-image-pull-secret-to-service-account
How can I determine whether it's using the correct authentication so I can further debug this issue?
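One way to see which credentials are actually in play (the secret name here is taken from the patch attempt below) is to check what the service account references and decode it:
# which pull secrets does the default service account reference?
$ kubectl get serviceaccount default -o jsonpath='{.imagePullSecrets}'
# decode the secret to inspect the registry credentials it carries
$ kubectl get secret gcp-cr-read-access -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d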
Not really an answer to the question but a solution in my case:
Seems there is something wrong with the kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}' as it does not seem to do anything...
Instead I edited the service account resource directly - no patch...
Demonstrated here:
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl describe serviceaccount default
Name:                default
Namespace:           app-1
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   default-token-tqp58
Tokens:              default-token-tqp58
Events:              <none>
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl get serviceaccount -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2020-09-17T15:50:34Z"
    name: default
    namespace: app-1
    resourceVersion: "111538"
    selfLink: /api/v1/namespaces/app-1/serviceaccounts/default
    uid: 5fe21574-67bf-485c-b9aa-d09c1fe3350c
  secrets:
  - name: default-token-tqp58
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
root@docker-ubuntu-s-1vcpu-1gb-lon1-01:~/multitenant-manager# kubectl patch -n app-1 serviceaccount default -p '{"imagepullsecrets": [{"name": "gcp-cr-read-access"}]}'
serviceaccount/default patched (no change)
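A likely explanation for the "patched (no change)" above: the field name is case-sensitive. With imagepullsecrets in lowercase, the strategic merge patch targets an unknown field, which the API server silently drops, hence no change; editing the resource directly works because the editor writes the correctly-cased field. The corrected patch would be:
$ kubectl patch serviceaccount default -n app-1 -p '{"imagePullSecrets": [{"name": "gcp-cr-read-access"}]}'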

Do RBAC rules apply to pods?

I have the following pod definition (notice the explicitly set service account and secret):
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-example
  labels:
    name: pod-service-account-example
spec:
  serviceAccountName: example-sa
  containers:
  - name: busybox
    image: busybox:latest
    command: ["sleep", "10000000"]
    env:
    - name: SECRET_KEY
      valueFrom:
        secretKeyRef:
          name: example-secret
          key: secret-key-123
It successfully runs. However, if I use the same service account example-sa and try to retrieve the example-secret, it fails:
kubectl get secret example-secret
Error from server (Forbidden): secrets "example-secret" is forbidden: User "system:serviceaccount:default:example-sa" cannot get resource "secrets" in API group "" in the namespace "default"
Does RBAC not apply for pods? Why is the pod able to retrieve the secret if not?
RBAC applies to service accounts, groups, and users, not to pods. When you refer to a secret in the env of a pod, the service account is not being used to get the secret. The kubelet gets the secret using its own kubernetes client credential. Since the kubelet uses its own credential, it does not matter whether the service account has RBAC permission to get the secret, because the service account is not used.
The service account is used when you want to invoke the Kubernetes API from a pod using the kubernetes standard client library or kubectl.
Code snippet of Kubelet for reference.
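To see RBAC applying to the service account on that path, call the API from inside the pod with the mounted token; given the missing binding, the expected result is a 403 (a sketch, run from a shell in the container, assuming curl or an equivalent is available):
# use the token the pod's service account mounts by default
$ TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
$ curl -sSk -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/api/v1/namespaces/default/secrets/example-secret
# -> 403 Forbidden: example-sa has no RBAC rule allowing get on secrets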

How to authenticate and access Kubernetes cluster for devops pipeline?

Normally you'd do ibmcloud login ⇒ ibmcloud ks cluster-config mycluster ⇒ copy and paste the export KUBECONFIG= and then you can run your kubectl commands.
But if this were being done for some automated devops pipeline outside of IBM Cloud, what is the method for authenticating and getting access to the cluster?
You should not copy your kubeconfig to the pipeline. Instead you can create a service account with permissions to a particular namespace and then use its credentials to access the cluster.
What I do is create a service account and role binding like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-tez-dev # account name
  namespace: tez-dev # namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tez-dev-full-access # role
  namespace: tez-dev
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods", "services"] # resources to which permissions are granted
  verbs: ["*"] # what actions are allowed
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: tez-dev-view
  namespace: tez-dev
subjects:
- kind: ServiceAccount
  name: gitlab-tez-dev
  namespace: tez-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tez-dev-full-access
Then you can get the token for the service account using:
kubectl describe secrets -n <namespace> gitlab-tez-dev-token-<value>
The output:
Name: gitlab-tez-dev-token-lmlwj
Namespace: tez-dev
Labels: <none>
Annotations: kubernetes.io/service-account.name: gitlab-tez-dev
kubernetes.io/service-account.uid: 5f0dae02-7b9c-11e9-a222-0a92bd3a916a
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1042 bytes
namespace: 7 bytes
token: <TOKEN>
In the above command, namespace is the namespace in which you created the account and the value is the unique value which you will see when you do
kubectl get secret -n <namespace>
Copy the token to your pipeline environment variables or configuration and then you can access it in the pipeline. For example, in gitlab I do (only the part that is relevant here):
k8s-deploy-stage:
  stage: deploy
  image: lwolf/kubectl_deployer:latest
  services:
    - docker:dind
  only:
    refs:
      - dev
  script:
    ######## CREATE THE KUBECFG ##########
    - kubectl config set-cluster ${K8S_CLUSTER_NAME} --server=${K8S_URL}
    - kubectl config set-credentials gitlab-tez-dev --token=${TOKEN}
    - kubectl config set-context tez-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-tez-dev --namespace=tez-dev
    - kubectl config use-context tez-dev-context
    ####### NOW COMMANDS WILL BE EXECUTED AS THE SERVICE ACCOUNT #########
    - kubectl apply -f deployment.yml
    - kubectl apply -f service.yml
    - kubectl rollout status -f deployment.yml
The KUBECONFIG environment variable is a list of paths to Kubernetes configuration files that define one or more (switchable) contexts for kubectl (https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
Copy your Kubernetes configuration file to your pipeline agent (~/.kube/config by default) and optionally set the KUBECONFIG environment variable. If you have different contexts in your config file, you may want to remove the ones you don't need in your pipeline before copying it, or switch contexts using kubectl config use-context.
Everything you need to connect to your kube api server is inside that config, certs, tokens etc.
If you don't want to copy a token into a file or want to use the API to automate the retrieval of the token, you can also execute some POST commands in order to programmatically retrieve your user token.
The full docs for this are here: https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install#kube_api
The key piece is retrieving your id token with the POST https://iam.bluemix.net/identity/token call.
The body will return an id_token that you can use in your Kubernetes API calls.
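A sketch of that call (the API key variable is a placeholder; see the linked docs for the exact parameter names):
$ curl -X POST "https://iam.bluemix.net/identity/token" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=urn:ibm:params:oauth:grant-type:apikey&apikey=$IBMCLOUD_API_KEY"
The JSON response includes the id_token mentioned above.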