How to authenticate and access Kubernetes cluster for devops pipeline? - kubernetes

Normally you'd do ibmcloud login ⇒ ibmcloud ks cluster-config mycluster ⇒ copy and paste the export KUBECONFIG= line, and then you can run your kubectl commands.
But if this were being done from an automated devops pipeline outside of IBM Cloud, what is the method for authenticating and getting access to the cluster?

You should not copy your kubeconfig to the pipeline. Instead you can create a service account with permissions to a particular namespace and then use its credentials to access the cluster.
What I do is create a service account and role binding like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab-tez-dev # account name
  namespace: tez-dev # namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tez-dev-full-access # role
  namespace: tez-dev
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods", "services"] # resources to which permissions are granted
  verbs: ["*"] # what actions are allowed
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: tez-dev-view
  namespace: tez-dev
subjects:
- kind: ServiceAccount
  name: gitlab-tez-dev
  namespace: tez-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: tez-dev-full-access
Then you can get the token for the service account using:
kubectl describe secrets -n <namespace> gitlab-tez-dev-token-<value>
The output:
Name:         gitlab-tez-dev-token-lmlwj
Namespace:    tez-dev
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: gitlab-tez-dev
              kubernetes.io/service-account.uid: 5f0dae02-7b9c-11e9-a222-0a92bd3a916a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1042 bytes
namespace:  7 bytes
token:      <TOKEN>
In the above command, <namespace> is the namespace in which you created the account, and <value> is the unique suffix you will see when you do
kubectl get secret -n <namespace>
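If you want to script this step, here is a one-liner sketch (assuming a pre-1.24 cluster, where the token secret is still auto-created for the service account):
TOKEN=$(kubectl -n tez-dev get secret \
  $(kubectl -n tez-dev get sa gitlab-tez-dev -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d)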
Copy the token to your pipeline environment variables or configuration and then you can access it in the pipeline. For example, in gitlab I do (only the part that is relevant here):
k8s-deploy-stage:
  stage: deploy
  image: lwolf/kubectl_deployer:latest
  services:
    - docker:dind
  only:
    refs:
      - dev
  script:
    ######## CREATE THE KUBECFG ##########
    - kubectl config set-cluster ${K8S_CLUSTER_NAME} --server=${K8S_URL}
    - kubectl config set-credentials gitlab-tez-dev --token=${TOKEN}
    - kubectl config set-context tez-dev-context --cluster=${K8S_CLUSTER_NAME} --user=gitlab-tez-dev --namespace=tez-dev
    - kubectl config use-context tez-dev-context
    ####### NOW COMMANDS WILL BE EXECUTED AS THE SERVICE ACCOUNT #########
    - kubectl apply -f deployment.yml
    - kubectl apply -f service.yml
    - kubectl rollout status -f deployment.yml

The KUBECONFIG environment variable is a list of paths to Kubernetes configuration files that define one or more (switchable) contexts for kubectl (https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/).
Copy your Kubernetes configuration file to your pipeline agent (~/.kube/config by default) and optionally set the KUBECONFIG environment variable. If you have different contexts in your config file, you may want to remove the ones you don't need in your pipeline before copying it, or switch contexts using kubectl config use-context.
Everything you need to connect to your kube API server is inside that config: certs, tokens, etc.
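For example, to trim a copied config down to the single context the pipeline needs (the context and file names here are placeholders):
kubectl config use-context pipeline-context        # keep only this one active
kubectl config delete-context unneeded-context     # drop contexts the pipeline shouldn't see
kubectl config view --minify --flatten > pipeline-kubeconfig   # emit a self-contained config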

If you don't want to copy a token into a file or want to use the API to automate the retrieval of the token, you can also execute some POST commands in order to programmatically retrieve your user token.
The full docs for this are here: https://cloud.ibm.com/docs/containers?topic=containers-cs_cli_install#kube_api
The key piece is retrieving your id token with the POST https://iam.bluemix.net/identity/token call.
The body will return an id_token that you can use in your Kubernetes API calls.
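As a sketch, the call can look like this with curl; the apikey grant and response_type shown here are assumptions based on the linked docs, so verify the exact request body there:
curl -s -X POST https://iam.bluemix.net/identity/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -H 'Accept: application/json' \
  -d 'grant_type=urn:ibm:params:oauth:grant-type:apikey' \
  -d "apikey=${IBMCLOUD_API_KEY}" \
  -d 'response_type=cloud_iam'
# the JSON response carries the id_token; send it as
# "Authorization: Bearer <id_token>" on your Kubernetes API calls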

Related

How do I fix a role-based problem when my role appears to have the correct permissions?

I am trying to establish the namespace "sandbox" in Kubernetes and have been using it for several days without issue. Today I got the below error.
I have checked to make sure that I have all of the requisite configmaps in place.
Is there a log or something where I can find what this is referring to?
panic: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
I did find this (MountVolume.SetUp failed for volume "kube-api-access-fcz9j" : object "default"/"kube-root-ca.crt" not registered) thread and have applied the below patch to my service account, but I am still getting the same error.
automountServiceAccountToken: false
UPDATE:
In answer to @p10l: I am working in a bare-metal cluster, version 1.23.0. No Terraform.
I am getting closer, but still not there.
This appears to be another RBAC problem, but the error does not make sense to me.
I have a user "dma". I am running workflows in the "sandbox" namespace using the context dma@kubernetes.
The error now is
Create request failed: workflows.argoproj.io is forbidden: User "dma" cannot create resource "workflows" in API group "argoproj.io" in the namespace "sandbox"
but that user indeed appears to have the correct permissions.
This is the output of
kubectl get role dma -n sandbox -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"Role","metadata":{"annotations":{},"name":"dma","namespace":"sandbox"},"rules":[{"apiGroups":["","apps","autoscaling","batch","extensions","policy","rbac.authorization.k8s.io","argoproj.io"],"resources":["pods","configmaps","deployments","events","pods","persistentvolumes","persistentvolumeclaims","services","workflows"],"verbs":["get","list","watch","create","update","patch","delete"]}]}
  creationTimestamp: "2021-12-21T19:41:38Z"
  name: dma
  namespace: sandbox
  resourceVersion: "1055045"
  uid: 94191881-895d-4457-9764-5db9b54cdb3f
rules:
- apiGroups:
  - ""
  - apps
  - autoscaling
  - batch
  - extensions
  - policy
  - rbac.authorization.k8s.io
  - argoproj.io
  - workflows.argoproj.io
  resources:
  - pods
  - configmaps
  - deployments
  - events
  - pods
  - persistentvolumes
  - persistentvolumeclaims
  - services
  - workflows
  verbs:
  - get
  - list
  - watch
  - create
  - update
  - patch
  - delete
This is the output of kubectl get rolebinding -n sandbox dma-sandbox-rolebinding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"rbac.authorization.k8s.io/v1","kind":"RoleBinding","metadata":{"annotations":{},"name":"dma-sandbox-rolebinding","namespace":"sandbox"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"Role","name":"dma"},"subjects":[{"kind":"ServiceAccount","name":"dma","namespace":"sandbox"}]}
  creationTimestamp: "2021-12-21T19:56:06Z"
  name: dma-sandbox-rolebinding
  namespace: sandbox
  resourceVersion: "1050593"
  uid: d4d53855-b5fc-4f29-8dbd-17f682cc91dd
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dma
subjects:
- kind: ServiceAccount
  name: dma
  namespace: sandbox
The issue you are describing is a recurring one, described here and here, where your cluster lacks the KUBECONFIG environment variable.
First, run echo $KUBECONFIG on all your nodes to see if it's empty.
If it is, look for the config file in your cluster, then copy it to all the nodes and export this variable by running export KUBECONFIG=/path/to/config. This file can usually be found at ~/.kube/config or /etc/kubernetes/admin.conf on master nodes.
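For example, on a kubeadm-style control-plane node (the paths are assumptions; adjust for your distribution):
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config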
Let me know if this solution worked in your case.

Openshift - Run pod only for specific time period

I'm new to OpenShift. We are using OpenShift deployments to deploy our multiple microservices (Spring Boot applications). The deployment is done from a Docker image.
We have a situation where we need to stop one microservice alone from midnight till 5 AM (due to an external dependency).
Could someone suggest a way to do this automatically?
I was able to run
oc scale deployment/sampleservice --replicas=0 manually to scale the number of pods down to zero, and scale back up to 1 manually later.
I'm not sure how to run this command automatically at a specific time. The CronJob in OpenShift should be able to do this, but I'm not sure how to configure a cronjob to execute an oc command.
Any guidance will be of great help
Using a cronjob is a good option.
First, you'll need an image that has the oc command line client available. I'm sure there's a prebuilt one out there somewhere, but since this will be running with privileges in your OpenShift cluster you want something you trust, which probably means building it yourself. I used:
FROM quay.io/centos/centos:8

RUN curl -o /tmp/openshift-client.tar.gz \
      https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-client-linux.tar.gz; \
    tar -C /bin -xf /tmp/openshift-client.tar.gz oc kubectl; \
    rm -f /tmp/openshift-client.tar.gz

ENTRYPOINT ["/bin/oc"]
In order to handle authentication correctly, you'll need to create a ServiceAccount and then assign it appropriate privileges through a Role and a RoleBinding. I created a ServiceAccount named oc-client-sa:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: oc-client-sa
  namespace: oc-client-example
A Role named oc-client-role that grants privileges to Pod and Deployment objects:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-role
  namespace: oc-client-example
rules:
  - verbs:
      - get
      - list
      - create
      - watch
      - patch
    apiGroups:
      - ''
    resources:
      - pods
  - verbs:
      - get
      - list
      - create
      - watch
      - patch
    apiGroups:
      - 'apps'
    resources:
      - deployments
      - deployments/scale
And a RoleBinding that connects the oc-client-sa ServiceAccount
to the oc-client-role Role:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oc-client-rolebinding
  namespace: oc-client-example
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: oc-client-role
subjects:
  - kind: ServiceAccount
    name: oc-client-sa
With all this in place, we can write a CronJob like this that will
scale down a deployment at a specific time. Note that we're running
the jobs using the oc-client-sa ServiceAccount we created earlier:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-down
  namespace: oc-client-example
spec:
  schedule: "00 00 * * *"
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
            - image: docker.io/larsks/openshift-client
              args:
                - scale
                - deployment/sampleservice
                - --replicas=0
              name: oc-scale-down
You would write a similar one to scale things back up at 5AM.
The oc client will automatically use the credentials provided to your pod by Kubernetes because of the serviceAccountName setting.
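For completeness, here is a sketch of the matching 5 AM scale-up job; it mirrors the CronJob above, with only the name, schedule, and replica count changed:
cat <<EOF | oc apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: scale-web-up
  namespace: oc-client-example
spec:
  schedule: "00 05 * * *"   # 5 AM
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: oc-client-sa
          restartPolicy: Never
          containers:
            - image: docker.io/larsks/openshift-client
              args:
                - scale
                - deployment/sampleservice
                - --replicas=1
              name: oc-scale-up
EOF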
API
You can use the OpenShift REST API client and write simple Python code that scales down the replicas. Pack this Python script into a Docker image and run it as a cronjob inside the OC cluster.
Simple Curl
Run a simple curl inside the cronjob to scale the deployment up and down at the desired times.
Here is the API endpoint for scaling a deployment: https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html#Get-apis-apps-v1beta1-namespaces-namespace-deployments-name-scale
API documentation : https://docs.openshift.com/container-platform/3.7/rest_api/apis-apps/v1beta1.Deployment.html
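For illustration, a sketch of such a call against the current apps/v1 scale subresource (the linked 3.7 docs use apps/v1beta1; the token, host, and names here are placeholders):
curl -k -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H 'Content-Type: application/merge-patch+json' \
  -d '{"spec":{"replicas":0}}' \
  https://api.example.com:6443/apis/apps/v1/namespaces/oc-client-example/deployments/sampleservice/scale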
CLI
If you don't want to package your own code as a Docker image, you can instead run the oc command directly: use an existing CLI image inside the cronjob and fire the command from there.
OC-cli : https://hub.docker.com/r/widerin/openshift-cli
Don't forget that authentication is required in both cases, whether you call the API or run a command inside the cronjob.

RBAC (Role Based Access Control) on K3s

After watching a few videos on RBAC (role-based access control) on Kubernetes (of which this one was the most transparent for me), I've followed the steps, however on k3s, not k8s, as all the sources imply. From what I could gather (not working), the problem isn't with the actual role-binding process, but rather with the x509 user cert, which isn't acknowledged by the API service:
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
This also isn't documented in Rancher's wiki on security for K3s (while documented for their k8s implementation), though it is described for Rancher 2.x itself; I'm not sure if it's a problem with my implementation or a k3s <-> k8s thing.
$ kubectl version --short
Client Version: v1.20.5+k3s1
Server Version: v1.20.5+k3s1
With duplication of the process, my steps are as follows:
Get k3s ca certs
This was described to be under /etc/kubernetes/pki (k8s); however, based on this, it seems to be at /var/lib/rancher/k3s/server/tls/ (server-ca.crt & server-ca.key).
Gen user certs from ca certs
# generate user key
$ openssl genrsa -out user.key 2048

# generate signing request from ca
$ openssl req -new -key user.key -out user.csr -subj "/CN=user/O=rbac"

# generate user.crt from this
$ openssl x509 -req -in user.csr -CA server-ca.crt -CAkey server-ca.key -CAcreateserial -out user.crt -days 365
... all good:
Creating kubeConfig file for user, based on the certs:
# Take user.crt and base64 encode to get encoded crt
cat user.crt | base64 -w0
# Take user.key and base64 encode to get encoded key
cat user.key | base64 -w0
Created config file:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <server-ca.crt base64-encoded>
    server: https://<k3s masterIP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: user
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: user
  user:
    client-certificate-data: <user.crt base64-encoded>
    client-key-data: <user.key base64-encoded>
Setup role & roleBinding (within specified namespace 'rbac')
role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
roleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: user
After all of this, I get fun times of...
$ kubectl get pods --kubeconfig userkubeconfig
error: You must be logged in to the server (Unauthorized)
Any suggestions please?
Apparently this Stack Overflow question presented a solution to the problem, but following the GitHub feed, it came more or less down to the same approach followed here (unless I'm missing something)?
As we can find in the Kubernetes Certificate Signing Requests documentation:
A few steps are required in order to get a normal user to be able to authenticate and invoke an API.
I will create an example to illustrate how you can get a normal user who is able to authenticate and invoke an API (I will use the user john as an example).
First, create PKI private key and CSR:
# openssl genrsa -out john.key 2048
NOTE: CN is the name of the user and O is the group that this user will belong to
# openssl req -new -key john.key -out john.csr -subj "/CN=john/O=group1"
# ls
john.csr john.key
Then create a CertificateSigningRequest and submit it to a Kubernetes Cluster via kubectl.
# cat <<EOF | kubectl apply -f -
> apiVersion: certificates.k8s.io/v1
> kind: CertificateSigningRequest
> metadata:
>   name: john
> spec:
>   groups:
>   - system:authenticated
>   request: $(cat john.csr | base64 | tr -d '\n')
>   signerName: kubernetes.io/kube-apiserver-client
>   usages:
>   - client auth
> EOF
certificatesigningrequest.certificates.k8s.io/john created
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   39s   kubernetes.io/kube-apiserver-client   system:admin   Pending
# kubectl certificate approve john
certificatesigningrequest.certificates.k8s.io/john approved
# kubectl get csr
NAME   AGE   SIGNERNAME                            REQUESTOR      CONDITION
john   52s   kubernetes.io/kube-apiserver-client   system:admin   Approved,Issued
Export the issued certificate from the CertificateSigningRequest:
# kubectl get csr john -o jsonpath='{.status.certificate}' | base64 -d > john.crt
# ls
john.crt john.csr john.key
With the certificate created, we can define the Role and RoleBinding for this user to access Kubernetes cluster resources. I will use the Role and RoleBinding similar to yours.
# cat role.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: john-role
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
  - list
# kubectl apply -f role.yml
role.rbac.authorization.k8s.io/john-role created
# cat rolebinding.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: john-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: john-role
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: john
# kubectl apply -f rolebinding.yml
rolebinding.rbac.authorization.k8s.io/john-binding created
The last step is to add this user into the kubeconfig file (see: Add to kubeconfig)
# kubectl config set-credentials john --client-key=john.key --client-certificate=john.crt --embed-certs=true
User "john" set.
# kubectl config set-context john --cluster=default --user=john
Context "john" created.
Finally, we can change the context to john and check if it works as expected.
# kubectl config use-context john
Switched to context "john".
# kubectl config current-context
john
# kubectl get pods
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          30m
# kubectl run web-2 --image=nginx
Error from server (Forbidden): pods is forbidden: User "john" cannot create resource "pods" in API group "" in the namespace "default"
As you can see, it works as expected (user john only has get and list permissions).
Thank you matt_j for the example/answer provided to my question. I marked it as the answer, as it was a direct answer to my question regarding RBAC via certificates. In addition, I'd also like to provide an example of RBAC via service accounts, as a variation (for those who prefer it for their specific use case).
Service account creation
//kubectl create serviceaccount name -n namespace
$ kubectl create serviceaccount udef -n rbac
This creates the service account plus, automatically, a corresponding secret (udef-token-lhvm8); you can see the secret's name in the service account's yaml output.
Get token from created secret:
// kubectl describe secret secretName -n namespace
$ kubectl describe secret udef-token-lhvm8 -n rbac
The secret will contain three items: (1) ca.crt, (2) namespace, (3) token.
# ... other secret context
Data
====
ca.crt: x bytes
namespace: x bytes
token: xxxx token xxxx
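To grab just the token non-interactively, a one-liner sketch (same secret name as above):
$ kubectl get secret udef-token-lhvm8 -n rbac -o jsonpath='{.data.token}' | base64 -d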
Put token into config file
You can start by taking your 'admin' config file and writing it out to a new file:
// location of **k3s** kubeconfig
$ sudo cat /etc/rancher/k3s/k3s.yaml > /home/{userHomeFolder}/userKubeConfig
Under the users section, you can replace the certificate data with the token:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxx root ca cert content xxx
    server: https://<host IP>:6443
  name: home-pi4
contexts:
- context:
    cluster: home-pi4
    user: nametype
    namespace: rbac
  name: user-homepi4
current-context: user-homepi4
kind: Config
preferences: {}
users:
- name: nametype
  user:
    token: xxxx token xxxx
The role and rolebinding manifests can be created as required, as previously specified (n.b. within the same namespace), in this case linking to the service account:
# role manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: user-rbac
  namespace: rbac
rules:
- apiGroups:
  - "*"
  resources:
  - pods
  verbs:
  - get
  - list
---
# rolebinding manifest
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: user-rb
  namespace: rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: user-rbac
subjects:
- kind: ServiceAccount
  name: udef
  namespace: rbac
With this being done, you will be able to test remotely:
// show pods -> will be allowed
$ kubectl get pods --kubeconfig userKubeConfig
..... valid response provided
// get namespaces (or other types of commands) -> should not be allowed
$ kubectl get namespaces --kubeconfig userKubeConfig
Error from server (Forbidden): namespaces is forbidden: User bla-bla

How to get kubernetes applications to change deploy configs

I have two applications running in K8. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
How we currently do this is manually by kicking off a process in APP A which adds a new DB in the data store (say db bob). Then we do:
kubectl edit deploy A
And change an environment variable to bob. This starts a rolling restart of all the pods of APP B. We would like to automate this process.
Is there anyway to get APP A to change the deployment config of APP B in k8?
Firstly answering your main question:
Is there anyway to get a service to change the deployment config of another service in k8?
From my understanding you are calling them Service A and B for their purpose in real life, but to facilitate understanding I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there anyway to get APP A to change the deployment config of APP B in k8?"
Then yes, you can give a pod admin privileges to manage other components of the cluster, using the kubectl set env command to change/add envs.
In order to achieve this, you will need:
A Service Account with needed permissions in the namespace.
NOTE: In my example below, since I don't know if you are working with multiple namespaces, I'm using a ClusterRole, granting cluster-admin to a specific user. If you use only one namespace for these apps, consider a Role instead.
A ClusterRoleBinding binding the permissions of the service account to a role of the Cluster.
The kubectl client inside the pod (added manually or by modifying the Docker image) of APP A
Steps to Reproduce:
Create a deployment to apply the cluster-admin privileges, I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: manager-deploy
  labels:
    app: manager
spec:
  replicas: 1
  selector:
    matchLabels:
      app: manager
  template:
    metadata:
      labels:
        app: manager
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: manager
        image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment var, mocking your Service B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-deploy
  labels:
    app: env-replace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-replace
  template:
    metadata:
      labels:
        app: env-replace
    spec:
      serviceAccountName: k8s-role
      containers:
      - name: env-replace
        image: gcr.io/google-samples/node-hello:1.0
        env:
        - name: DATASTORE_NAME
          value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges. I'm naming it service-account-for-pod.yaml (notice it's mentioned in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: k8s-role
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: k8s-role
Apply service-account-for-pod.yaml, deploy-env.yaml, and manager-deploy.yaml, then list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary, and run kubectl set env deployment/deployment_name VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubt on how to apply this solution to your environment.
You could give service A access to your cluster (install kubectl and allow traffic from the NAT of service A to your cluster master) and execute your commands via cron jobs, Jenkins, ssh, or similar. You can also use kubectl patch, or export the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it with some regex/awk/sed, and then apply it. The --export method is getting deprecated, though, so alternatively service A could download the Git repo and apply the new config that way.
Thank you for the answers all (upvoted, as they were both correct). I am just adding my own answer to document exactly what solved it for me.
In my case I just needed to make use of the patch URL available in k8s. That plus this example worked.
All I needed to do was create a service account to restrict who can patch where, restrict that account to Service A, and use the Java client in Service A to update the chart of Service B. After that the pods would roll and it was done.
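For reference, here is a sketch of what such a patch request can look like with plain curl instead of the Java client (the token, API host, and resource names are placeholders; a strategic merge patch merges containers by name):
curl -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H 'Content-Type: application/strategic-merge-patch+json' \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"app-b","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}' \
  https://${APISERVER}/apis/apps/v1/namespaces/default/deployments/app-b-deployment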

Kubernetes log, User "system:serviceaccount:default:default" cannot get services in the namespace

Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:default:default" cannot get services in the namespace "mycomp-services-process"
For the above issue I have created "mycomp-service-process" namespace and checked the issue.
But it shows again message like this:
Message: Forbidden!Configured service account doesn't have access. Service account may have been revoked. User "system:serviceaccount:mycomp-services-process:default" cannot get services in the namespace "mycomp-services-process"
Creating a namespace won't, of course, solve the issue, as that is not the problem at all.
In the first error, the issue is that the serviceaccount default in the default namespace cannot get services, because it does not have access to list/get services. So what you need to do is assign a role to that user using a clusterrolebinding.
Following the set of minimum privileges, you can first create a role which has access to list services:
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
The above snippet creates a clusterrole which can list, get, and watch services. (You will have to save the above spec to a YAML file and apply it.)
Now we can use this clusterrole to create a clusterrolebinding:
kubectl create clusterrolebinding service-reader-pod \
  --clusterrole=service-reader \
  --serviceaccount=default:default
In the above command, service-reader-pod is the name of the clusterrolebinding, and it assigns the service-reader clusterrole to the default serviceaccount in the default namespace. Similar steps can be followed for the second error you are facing.
In this case I created a clusterrole and clusterrolebinding, but you might want to create a role and rolebinding instead; a namespaced equivalent is sketched below. You can check the documentation in detail here.
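Here is that namespaced sketch, using kubectl's create shortcuts (the namespace name is taken from the question above):
kubectl create role service-reader -n mycomp-services-process \
  --verb=get,list,watch --resource=services
kubectl create rolebinding service-reader-pod -n mycomp-services-process \
  --role=service-reader --serviceaccount=mycomp-services-process:default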
This is only for non-prod clusters!
You should bind the service account system:serviceaccount:default:default (which is the default account bound to pods) to the cluster-admin role. Just create a yaml (named, for example, fabric8-rbac.yaml) with the following contents:
# NOTE: The service account `default:default` already exists in the k8s cluster.
# You can create a new account like this:
#---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: <new-account-name>
#  namespace: <namespace>
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fabric8-rbac
subjects:
- kind: ServiceAccount
  # Reference to upper's `metadata.name`
  name: default
  # Reference to upper's `metadata.namespace`
  namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Then, apply it by running kubectl apply -f fabric8-rbac.yaml.
If you want to unbind them, just run kubectl delete -f fabric8-rbac.yaml.
Just to add.
This can also occur when you are redeploying an existing application to the wrong Kubernetes cluster, one that is similar to the intended cluster.
Ensure you check that the Kubernetes cluster you're deploying to is the correct one; a quick check is shown below.
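A quick sanity check before deploying (standard kubectl commands):
kubectl config current-context   # the context kubectl will use
kubectl cluster-info             # the API server that context points at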