How do I configure Longhorn backup so it executes some bash scripts in the pod before and after snapshot/backup is taken?
Something similar to Velero's backup hooks.
annotations:
backup.velero.io/backup-volumes: data
pre.hook.backup.velero.io/command: '["/usr/bin/mysql", "-e", "flush tables with read lock;"]'
pre.hook.backup.velero.io/container: mysql
post.hook.backup.velero.io/command: '["/usr/bin/mysql", "-e", "unlock tables;"]'
post.hook.backup.velero.io/container: mysql
Apparently this is not possible at the moment, according to the Longhorn GitHub issue.
You can orchestrate similar behaviour yourself by pairing a CSI VolumeSnapshot with your own freeze/thaw commands:
kubectl exec mypod-id -- app_freeze
kubectl apply -f volumesnapshot.yaml
kubectl exec mypod-id -- app_thaw
Where volumesnapshot.yaml is:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
name: my-longhorn-snapshot
spec:
volumeSnapshotClassName: longhorn
source:
persistentVolumeClaimName: my-longhorn-pvc
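If you want to wrap those three steps into something closer to a backup hook, a minimal sketch could look like this (app_freeze and app_thaw stand in for whatever quiesce commands your application actually provides; the pod and snapshot names are the ones used above):
#!/bin/bash
# Ad-hoc "pre/post hook" around a Longhorn CSI snapshot.
set -euo pipefail

POD=mypod-id                   # pod that mounts the Longhorn PVC
SNAPSHOT=my-longhorn-snapshot  # metadata.name from volumesnapshot.yaml

kubectl exec "$POD" -- app_freeze
# Make sure the application is thawed even if the snapshot step fails.
trap 'kubectl exec "$POD" -- app_thaw' EXIT

kubectl apply -f volumesnapshot.yaml

# Wait until the snapshot has been cut before allowing writes again.
until [ "$(kubectl get volumesnapshot "$SNAPSHOT" -o jsonpath='{.status.readyToUse}')" = "true" ]; do
  sleep 2
done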
See example for IRIS database: https://community.intersystems.com/post/amazon-eks-and-iris-high-availability-and-backup
Related
As part of a POC, I am trying to back up and restore volumes provisioned by the GKE CSI driver in the same GKE cluster. However, the restore fails and there are no logs to debug it with.
Steps:
Create volume snapshot class: kubectl create -f vsc.yaml
# vsc.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
name: csi-gce-vsc
labels:
"velero.io/csi-volumesnapshot-class": "true"
driver: pd.csi.storage.gke.io
deletionPolicy: Delete
Create storage class: kubectl create -f sc.yaml
# sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: pd-example
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
type: pd-standard
Create namespace: kubectl create namespace csi-app
Create a persistent volume claim: kubectl create -f pvc.yaml
# pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: podpvc
namespace: csi-app
spec:
accessModes:
- ReadWriteOnce
storageClassName: pd-example
resources:
requests:
storage: 6Gi
Create a pod to consume the pvc: kubectl create -f pod.yaml
# pod.yaml
---
apiVersion: v1
kind: Pod
metadata:
name: web-server
namespace: csi-app
spec:
containers:
- name: web-server
image: nginx
volumeMounts:
- mountPath: /var/lib/www/html
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: podpvc
readOnly: false
Once the PVC was bound, I created the Velero backup:
velero backup create test --include-resources=pvc,pv --include-namespaces=csi-app --wait
Output:
Backup request "test" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
...
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe test` and `velero backup logs test`.
velero backup describe test
Name: test
Namespace: velero
Labels: velero.io/storage-location=default
Annotations: velero.io/source-cluster-k8s-gitversion=v1.21.5-gke.1302
velero.io/source-cluster-k8s-major-version=1
velero.io/source-cluster-k8s-minor-version=21
Phase: Completed
Errors: 0
Warnings: 1
Namespaces:
Included: csi-app
Excluded: <none>
Resources:
Included: pvc, pv
Excluded: <none>
Cluster-scoped: auto
Label selector: <none>
Storage Location: default
Velero-Native Snapshot PVs: auto
TTL: 720h0m0s
Hooks: <none>
Backup Format Version: 1.1.0
Started: 2021-12-22 15:40:08 +0300 +03
Completed: 2021-12-22 15:40:10 +0300 +03
Expiration: 2022-01-21 15:40:08 +0300 +03
Total items to be backed up: 2
Items backed up: 2
Velero-Native Snapshots: <none included>
After the backup was created, I verified it was available in my GCS bucket.
Then I deleted all the existing resources to test the restore:
kubectl delete -f pod.yaml
kubectl delete -f pvc.yaml
kubectl delete -f sc.yaml
kubectl delete namespace csi-app
Run restore command:
velero restore create --from-backup test --wait
Output:
Restore request "test-20211222154302" submitted successfully.
Waiting for restore to complete. You may safely press ctrl-c to stop waiting - your restore will continue in the background.
.
Restore completed with status: PartiallyFailed. You may check for more information using the commands `velero restore describe test-20211222154302` and `velero restore logs test-20211222154302`.
The velero restore describe and velero restore logs commands don't return any description/logs.
What did you expect to happen:
I was expecting the PV, PVC and the namespace to be restored.
The following information will help us better understand what's going on:
The velero debug --backup test --restore test-20211222154302 command has been stuck for more than 10 minutes and I couldn't generate the support bundle.
Output:
2021/12/22 15:45:16 Collecting velero resources in namespace: velero
2021/12/22 15:45:24 Collecting velero deployment logs in namespace: velero
2021/12/22 15:45:28 Collecting log and information for backup: test
Environment:
Velero version (use velero version):
Client:
Version: v1.7.1
Git commit: -
Server:
Version: v1.7.1
Velero features (use velero client config get features):
features:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:33:37Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.5-gke.1302", GitCommit:"639f3a74abf258418493e9b75f2f98a08da29733", GitTreeState:"clean", BuildDate:"2021-10-21T21:35:48Z", GoVersion:"go1.16.7b7", Compiler:"gc", Platform:"linux/amd64"}
Kubernetes installer & version:
GKE 1.21.5-gke.1302
Cloud provider or hardware configuration:
GCP
OS (e.g. from /etc/os-release):
GCP Container-Optimized OS (COS)
You should be able to check the logs as mentioned:
Restore completed with status: PartiallyFailed. You may check for more information using the commands `velero restore describe test-20211222154302` and `velero restore logs test-20211222154302`.
The velero restore describe and velero restore logs commands don't return any description/logs.
The latter is available after the restore has completed; check it for errors and it should show what went wrong.
Since you were doing a PV/PVC backup with CSI, you should have Velero set up to support it:
https://kubernetes-csi.github.io/docs/snapshot-restore-feature.html
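If CSI support has never been enabled on that Velero installation, the setup usually looks something like the following (only a sketch: the bucket name is a placeholder and the plugin versions must be matched to your Velero release):
# Install/refresh Velero with the GCP object-store plugin, the CSI plugin
# and the EnableCSI feature flag
velero install \
  --provider gcp \
  --plugins velero/velero-plugin-for-gcp:v1.3.0,velero/velero-plugin-for-csi:v0.2.0 \
  --features=EnableCSI \
  --bucket <YOUR_BUCKET> \
  --secret-file ./credentials-velero

# The client needs the feature flag too, otherwise `velero backup describe`
# won't show the CSI snapshot details
velero client config set features=EnableCSI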
Depending on the plugin version you used, it might have been a bug, like:
https://github.com/vmware-tanzu/velero-plugin-for-csi/pull/122
This should be fixed in the latest release, for example v0.3.2:
https://github.com/vmware-tanzu/velero-plugin-for-csi/releases/tag/v0.3.2
So start with:
velero restore logs test-20211222154302
and go from there. Please update the question with your findings, and with the fix if you manage to resolve it. Thank you.
I want to get all events that occurred in a Kubernetes cluster into a Python dictionary, perhaps using some API to extract data from events that occurred in the past. I found on the internet that this is possible by storing all kube-watch data in Prometheus and accessing it later, but I am unable to figure out how to set this up and see all past Pod events in Python. Any alternative solutions for accessing past events are also appreciated. Thanks!
I'll describe a solution that is not complicated and I think meets all your requirements.
There are tools such as Eventrouter that take Kubernetes events and push them to a user-specified sink. However, as you mentioned, you only need Pod events, so I suggest a slightly different approach.
In short, you can run the kubectl get events --watch command from within a Pod and collect the output from that command using a log aggregation system like Loki.
Below, I will provide a detailed step-by-step explanation.
1. Running kubectl command from within a Pod
To display only Pod events, you can use:
$ kubectl get events --watch --field-selector involvedObject.kind=Pod
We want to run this command from within a Pod. For security reasons, I've created a separate events-collector ServiceAccount with the view ClusterRole assigned, and our Pod will run under this ServiceAccount.
NOTE: I've created a Deployment instead of a single Pod.
$ cat all-in-one.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: events-collector
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: events-collector-binding
subjects:
- kind: ServiceAccount
name: events-collector
namespace: default
roleRef:
kind: ClusterRole
name: view
apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: events-collector
name: events-collector
spec:
selector:
matchLabels:
app: events-collector
template:
metadata:
labels:
app: events-collector
spec:
serviceAccountName: events-collector
containers:
- image: bitnami/kubectl
name: test
command: ["kubectl"]
args: ["get","events", "--watch", "--field-selector", "involvedObject.kind=Pod"]
After applying the above manifest, the events-collector Deployment was created and collects Pod events as expected:
$ kubectl apply -f all-in-one.yml
serviceaccount/events-collector created
clusterrolebinding.rbac.authorization.k8s.io/events-collector-binding created
deployment.apps/events-collector created
$ kubectl get deploy,pod | grep events-collector
deployment.apps/events-collector 1/1 1 1 14s
pod/events-collector-d98d6c5c-xrltj 1/1 Running 0 14s
$ kubectl logs -f events-collector-d98d6c5c-xrltj
LAST SEEN TYPE REASON OBJECT MESSAGE
77s Normal Scheduled pod/app-1-5d9ccdb595-m9d5n Successfully assigned default/app-1-5d9ccdb595-m9d5n to gke-cluster-2-default-pool-8505743b-brmx
76s Normal Pulling pod/app-1-5d9ccdb595-m9d5n Pulling image "nginx"
71s Normal Pulled pod/app-1-5d9ccdb595-m9d5n Successfully pulled image "nginx" in 4.727842954s
70s Normal Created pod/app-1-5d9ccdb595-m9d5n Created container nginx
70s Normal Started pod/app-1-5d9ccdb595-m9d5n Started container nginx
73s Normal Scheduled pod/app-2-7747dcb588-h8j4q Successfully assigned default/app-2-7747dcb588-h8j4q to gke-cluster-2-default-pool-8505743b-p7qt
72s Normal Pulling pod/app-2-7747dcb588-h8j4q Pulling image "nginx"
67s Normal Pulled pod/app-2-7747dcb588-h8j4q Successfully pulled image "nginx" in 4.476795932s
66s Normal Created pod/app-2-7747dcb588-h8j4q Created container nginx
66s Normal Started pod/app-2-7747dcb588-h8j4q Started container nginx
2. Installing Loki
You can install Loki to store logs and process queries. Loki is like Prometheus, but for logs :). The easiest way to install Loki is to use the grafana/loki-stack Helm chart:
$ helm repo add grafana https://grafana.github.io/helm-charts
"grafana" has been added to your repositories
$ helm repo update
...
Update Complete. ⎈Happy Helming!⎈
$ helm upgrade --install loki grafana/loki-stack
$ kubectl get pods | grep loki
loki-0 1/1 Running 0 76s
loki-promtail-hm8kn 1/1 Running 0 76s
loki-promtail-nkv4p 1/1 Running 0 76s
loki-promtail-qfrcr 1/1 Running 0 76s
3. Querying Loki with LogCLI
You can use the LogCLI tool to run LogQL queries against a Loki server. Detailed information on installing and using this tool can be found in the LogCLI documentation. I'll demonstrate how to install it on Linux:
$ wget https://github.com/grafana/loki/releases/download/v2.2.1/logcli-linux-amd64.zip
$ unzip logcli-linux-amd64.zip
Archive: logcli-linux-amd64.zip
inflating: logcli-linux-amd64
$ mv logcli-linux-amd64 logcli
$ sudo cp logcli /bin/
$ whereis logcli
logcli: /bin/logcli
To query the Loki server from outside the Kubernetes cluster, you may need to expose it using the Ingress resource:
$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
name: loki-ingress
spec:
rules:
- http:
paths:
- backend:
serviceName: loki
servicePort: 3100
path: /
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/loki-ingress created
$ kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
loki-ingress <none> * <PUBLIC_IP> 80 19s
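If you only need to query Loki ad hoc, you can also skip the Ingress and port-forward the loki Service instead (this assumes the default chart values, which create a Service named loki listening on port 3100):
# Terminal 1: forward the Loki service to localhost
kubectl port-forward svc/loki 3100:3100

# Terminal 2: point LogCLI at the forwarded port
export LOKI_ADDR=http://localhost:3100
logcli query '{app="events-collector"}'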
Finally, I've created a simple python script that we can use to query the Loki server:
NOTE: We need to set the LOKI_ADDR environment variable as described in the documentation. You need to replace the <PUBLIC_IP> with your Ingress IP.
$ cat query_loki.py
#!/usr/bin/env python3
import os
os.environ['LOKI_ADDR'] = "http://<PUBLIC_IP>"
os.system("logcli query '{app=\"events-collector\"}'")
$ ./query_loki.py
...
2021-07-02T10:33:01Z {} 2021-07-02T10:33:01.626763464Z stdout F 0s Normal Pulling pod/backend-app-5d99cf4b-c9km4 Pulling image "nginx"
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.836755152Z stdout F 0s Normal Scheduled pod/backend-app-5d99cf4b-c9km4 Successfully assigned default/backend-app-5d99cf4b-c9km4 to gke-cluster-1-default-pool-328bd2b1-288w
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.649954267Z stdout F 0s Normal Started pod/web-app-6fcf9bb7b8-jbrr9 Started container nginx
2021-07-02T10:33:00Z {} 2021-07-02T10:33:00.54819851Z stdout F 0s Normal Created pod/web-app-6fcf9bb7b8-jbrr9 Created container nginx
2021-07-02T10:32:59Z {} 2021-07-02T10:32:59.414571562Z stdout F 0s Normal Pulled pod/web-app-6fcf9bb7b8-jbrr9 Successfully pulled image "nginx" in 4.228468876s
...
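Since the goal is to get the events into a Python dictionary, note that LogCLI is only a thin wrapper around Loki's HTTP API; you can query the /loki/api/v1/query_range endpoint directly and parse the JSON response (for example with requests and .json() in Python, which gives you a dict). A rough sketch with curl (the IP and time range are placeholders):
# Pull the last hour of collector output as JSON;
# .data.result[].values holds [timestamp, log line] pairs
curl -sG "http://<PUBLIC_IP>/loki/api/v1/query_range" \
  --data-urlencode 'query={app="events-collector"}' \
  --data-urlencode "start=$(date -d '1 hour ago' +%s)000000000" \
  --data-urlencode 'limit=1000' | jq '.data.result[].values'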
Two years ago, when I took the CKA exam, I already had this question. At that time all I could do was look at the official k8s.io documentation. Now I'm just curious about generating pv / pvc / storageClass YAML purely via the kubectl CLI. What I'm looking for is something similar to the logic used for a Deployment, for example:
$ kubectl create deploy test --image=nginx --port=80 --dry-run -o yaml
W0419 23:54:11.092265 76572 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: test
name: test
spec:
replicas: 1
selector:
matchLabels:
app: test
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: test
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
resources: {}
status: {}
Or similar logic to run a single pod:
$ kubectl run test-pod --image=nginx --port=80 --dry-run -o yaml
W0419 23:56:29.174692 76654 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
run: test-pod
name: test-pod
spec:
containers:
- image: nginx
name: test-pod
ports:
- containerPort: 80
resources: {}
dnsPolicy: ClusterFirst
restartPolicy: Always
status: {}
So what should I type in order to generate pv / pvc / storageClass YAML? Currently the fastest declarative way I know of is:
cat <<EOF | kubectl create -f -
<PV / PVC / storageClass yaml goes here>
EOF
Edited: Please note that I'm looking for any fast way to generate a correct pv / pvc / storageClass template from the CLI without having to remember the exact syntax, not necessarily via kubectl.
There is no kubectl command to generate resources like PV, PVC, or storage class.
From the certification point of view, you have to go over k8s.io and look for the PV, PVC, and storage class examples under the Tasks section.
Under the Tasks section most of the YAML will be the same, and for now this is one of the fastest approaches in the exam.
TL;DR:
Look through, bookmark, and build a mental index of all the YAML files in this GitHub directory (content/en/examples/pods) before the exam. This is 100% legal according to the CKA curriculum.
https://github.com/kubernetes/website/tree/master/content/en/examples/pods/storage/pv-volume.yaml
Then use this form during the exam:
kubectl create -f https://k8s.io/examples/pods/storage/pv-volume.yaml
In case you need to edit and apply:
# curl
curl -sL https://k8s.io/examples/pods/storage/pv-volume.yaml -o /your/path/pv-volume.yaml
# wget
wget -O /your/path/pv-volume.yaml https://k8s.io/examples/pods/storage/pv-volume.yaml
vi /your/path/pv-volume.yaml
kubectl apply -f /your/path/pv-volume.yaml
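If you would rather not depend on any URL at all, the heredoc form from the question also works well with a minimal template you can type from memory. A rough sketch covering all three kinds (names, sizes, paths and the provisioner are only examples, so adjust them to your scenario):
cat <<EOF | kubectl create -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
spec:
  storageClassName: local-storage
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF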
Story:
Actually, while looking for my own answer, I found an article floating around that suggested bookmarking these 100% legal pages:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/#create-a-persistentvolume
https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#creating-a-cron-job
https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/
Note that:
kubectl apply -f https://k8s.io/examples/pods/storage/pv-volume.yaml
kubectl can create objects from a URL.
Where does the original https://k8s.io point to?
What else could I benefit from?
Then, after digging into the "pods/storage/pv-volume.yaml" reference above, the link points to:
https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/pods/storage/pv-volume.yaml
Which directs to:
https://github.com/kubernetes/website/tree/master/content/en/examples/pods
So https://k8s.io is a shortened URI as well as an HTTP 301 redirect to https://github.com/kubernetes/website/tree/master/content/en, which helps exam candidates produce (not copy-and-paste) these manifests quickly in the exam terminal.
I am new to Kubernetes. I am trying to mimic a behaviour a bit like what I do with docker-compose when I serve a Couchbase database in a Docker container.
couchbase:
image: couchbase
volumes:
- ./couchbase:/opt/couchbase/var
ports:
- 8091-8096:8091-8096
- 11210-11211:11210-11211
I managed to create a cluster on my local machine using a tool called "kind":
kind create cluster --name my-cluster
kubectl config use-context my-cluster
Then I tried to use that cluster to deploy a Couchbase service.
I created a file named couchbase.yaml with the following content (again, trying to mimic what I do with my docker-compose file):
apiVersion: apps/v1
kind: Deployment
metadata:
name: couchbase
namespace: my-project
labels:
platform: couchbase
spec:
replicas: 1
selector:
matchLabels:
platform: couchbase
template:
metadata:
labels:
platform: couchbase
spec:
volumes:
- name: couchbase-data
hostPath:
# directory location on host
path: /home/me/my-project/couchbase
# this field is optional
type: Directory
containers:
- name: couchbase
image: couchbase
volumeMounts:
- mountPath: /opt/couchbase/var
name: couchbase-data
Then I start the deployment like this:
kubectl create namespace my-project
kubectl apply -f couchbase.yaml
kubectl expose deployment -n my-project couchbase --type=LoadBalancer --port=8091
However, my deployment never actually starts:
kubectl get deployments -n my-project couchbase
NAME READY UP-TO-DATE AVAILABLE AGE
couchbase 0/1 1 0 6m14s
And when I look for the logs I see this:
kubectl logs -n my-project -lplatform=couchbase --all-containers=true
Error from server (BadRequest): container "couchbase" in pod "couchbase-589f7fc4c7-th2r2" is waiting to start: ContainerCreating
As the OP mentioned in a comment, the issue was solved by using an extra mount, as explained in the documentation: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
Here is the OP's comment, formatted so it's more readable:
the error shows up when I run this command:
kubectl describe pods -n my-project couchbase
I could fix it by creating a new kind cluster:
kind create cluster --config cluster.yaml
Passing this content in cluster.yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: inf
nodes:
- role: control-plane
extraMounts:
- hostPath: /home/me/my-project/couchbase
containerPath: /couchbase
In couchbase.yaml the path becomes path: /couchbase of course.
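If the deployment is already running, a quick way to switch the volume over without re-applying the whole manifest is a JSON patch along these lines (a sketch; the names are the ones from the question, and the index 0 assumes couchbase-data is the first entry in the volumes list):
# Point the hostPath volume at the path that kind mounts into the node
kubectl -n my-project patch deployment couchbase --type json \
  -p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/hostPath/path", "value": "/couchbase"}]'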
I have two applications running in Kubernetes. APP A has write access to a data store and APP B has read access.
APP A needs to be able to change APP B's running deployment.
Currently we do this manually by kicking off a process in APP A which adds a new DB to the data store (say, db bob). Then we run:
kubectl edit deploy A
And change an environment variable to bob. This starts a rolling restart of all of APP B's pods. We would like to automate this process.
Is there any way to get APP A to change the deployment config of APP B in k8s?
Firstly answering your main question:
Is there any way to get a service to change the deployment config of another service in k8s?
From my understanding you were calling them Service A and Service B because of their role in real life, but to make things clearer I suggested an edit to call them APP A and APP B, because:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service).
So if in your question you meant:
"Is there anyway to get APP A to change the deployment config of APP B in k8?"
Then Yes, you can give a pod admin privileges to manage other components of the cluster using the kubectl set env command to change/add envs.
In order to achieve this, you will need:
A ServiceAccount with the needed permissions in the namespace.
NOTE: In my example below, since I don't know whether you are working with multiple namespaces, I'm using a ClusterRole and granting cluster-admin to a specific ServiceAccount. If you use only one namespace for these apps, consider a Role instead (see the sketch after this list).
A ClusterRoleBinding binding the permissions of the ServiceAccount to a role of the cluster.
The kubectl client inside APP A's pod (added manually or by modifying the Docker image).
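For the single-namespace case mentioned in the note, a rough sketch of that tighter alternative could look like the following; it only allows reading and patching Deployments in the default namespace (names are illustrative, and the walkthrough below sticks with cluster-admin for simplicity):
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-editor
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-editor-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: k8s-role
  namespace: default
roleRef:
  kind: Role
  name: deployment-editor
  apiGroup: rbac.authorization.k8s.io
EOF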
Steps to Reproduce:
Create a deployment that will run with the cluster-admin privileges; I'm naming it manager-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: manager-deploy
labels:
app: manager
spec:
replicas: 1
selector:
matchLabels:
app: manager
template:
metadata:
labels:
app: manager
spec:
serviceAccountName: k8s-role
containers:
- name: manager
image: gcr.io/google-samples/node-hello:1.0
Create a deployment with an environment variable, mocking your APP B. I'm naming it deploy-env.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: env-deploy
labels:
app: env-replace
spec:
replicas: 1
selector:
matchLabels:
app: env-replace
template:
metadata:
labels:
app: env-replace
spec:
serviceAccountName: k8s-role
containers:
- name: env-replace
image: gcr.io/google-samples/node-hello:1.0
env:
- name: DATASTORE_NAME
value: "john"
Create a ServiceAccount and a ClusterRoleBinding with cluster-admin privileges; I'm naming it service-account-for-pod.yaml (notice the ServiceAccount is referenced in manager-deploy.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: k8s-role
subjects:
- kind: ServiceAccount
name: k8s-role
namespace: default
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: k8s-role
Apply service-account-for-pod.yaml, deploy-env.yaml and manager-deploy.yaml, then list the current environment variables from the deploy-env pod:
$ kubectl apply -f manager-deploy.yaml
deployment.apps/manager-deploy created
$ kubectl apply -f deploy-env.yaml
deployment.apps/env-deploy created
$ kubectl apply -f service-account-for-pod.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-role created
serviceaccount/k8s-role created
$ kubectl exec -it env-deploy-fbd95bb94-hcq75 -- printenv
DATASTORE_NAME=john
Shell into the manager pod, download the kubectl binary and run kubectl set env deployment/<deployment_name> VAR_NAME=VALUE:
$ kubectl exec -it manager-deploy-747c9d5bc8-p684s -- /bin/bash
root@manager-deploy-747c9d5bc8-p684s:/# curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# chmod +x ./kubectl
root@manager-deploy-747c9d5bc8-p684s:/# mv ./kubectl /usr/local/bin/kubectl
root@manager-deploy-747c9d5bc8-p684s:/# kubectl set env deployment/env-deploy DATASTORE_NAME=bob
Verify the env var value on the pod (notice that the pod is recreated when the deployment is modified):
$ kubectl exec -it env-deploy-7f565ffc4-t46zc -- printenv
DATASTORE_NAME=bob
Let me know in the comments if you have any doubt on how to apply this solution to your environment.
You could give APP A access to your cluster (install kubectl in it and allow traffic from APP A's NAT to your cluster master) and execute the commands from there via cron jobs, Jenkins, ssh, or something similar. You could also use kubectl patch, or export the current config of the second deployment with kubectl get deployment <name> -o yaml --export > deployment.yaml, edit it with some regex/awk/sed and apply it again, although the --export flag is being deprecated, so you might as well have APP A pull the Git repo and apply the new config that way.
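As a sketch of the kubectl patch route, reusing the deployment and env var names from the other answer (illustrative only; containers and env entries are merged by name in a strategic merge patch, so only the changed value needs to be sent):
kubectl patch deployment env-deploy -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}'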
Thank you all for the answers (upvoted, as they were both correct). I am just adding my own answer to document exactly what solved it for me.
In my case I just needed to make use of the patch URL available in Kubernetes. That, plus this example, worked.
All I needed to do was create a service account to restrict who can patch where, restrict that account to APP A, and use the Java client in APP A to update the chart of APP B. After that the pods roll and it's done.
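For completeness, the "patch URL" route (which is what the Java client calls under the hood) boils down to something like this when run from inside a pod that uses the restricted service account. This is only a sketch; it reuses the env-deploy names from the accepted answer and the standard in-cluster ServiceAccount mounts:
# Credentials mounted into every pod that runs under a ServiceAccount
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# PATCH the env var on the target Deployment; the Deployment controller then rolls the pods
curl --cacert "$CACERT" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -X PATCH \
  -d '{"spec":{"template":{"spec":{"containers":[{"name":"env-replace","env":[{"name":"DATASTORE_NAME","value":"bob"}]}]}}}}' \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/default/deployments/env-deploy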