Unable to update K8s manifest file by sed

I have a YAML file like the one below:
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: 2019-03-20T23:17:42Z
    name: default
    namespace: dev4
    resourceVersion: "80999"
    selfLink: /api/v1/namespaces/dev4/serviceaccounts/default
    uid: 5c6e0d09-4b66-11e9-b4e3-0a779a87bb40
  secrets:
  - name: default-token-tl4dd
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"pod-labeler","namespace":"dev4"}}
    creationTimestamp: 2020-04-21T05:46:25Z
    name: pod-labeler
    namespace: dev4
    resourceVersion: "113455688"
    selfLink: /api/v1/namespaces/dev4/serviceaccounts/pod-labeler
    uid: 702dadda-8393-11ea-abd9-0a768ca51346
  secrets:
  - name: pod-labeler-token-6vgp7
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
I am using sed to update the file before applying it to a cluster; my sed command is:
sed -i '/uid: \|selfLink: \|resourceVersion: \|creationTimestamp: /d' test1-serviceaccounts.yaml
However, I get the following YAML file:
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: default
    namespace: dev4
  secrets:
  - name: default-token-tl4dd
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"pod-labeler","namespace":"dev4"}}
    name: pod-labeler
    namespace: dev4
  secrets:
  - name: pod-labeler-token-6vgp7
kind: List
metadata:
How should I modify my sed expression to also get rid of the trailing metadata: line?
Note: it should delete the last line.
I did try the sed command below; however, it produces an error.
Updated sed command:
sed -i '/uid: \|selfLink: \|resourceVersion: \|creationTimestamp: /$ d' 128-res-test-black-dev4-serviceaccounts.yaml
Error from executing the above command:
sed: -e expression #1, char 60: unknown command: `$'
The target file should look like this:
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: default
    namespace: dev4
  secrets:
  - name: default-token-tl4dd
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"name":"pod-labeler","namespace":"dev4"}}
    name: pod-labeler
    namespace: dev4
  secrets:
  - name: pod-labeler-token-6vgp7
kind: List

You should really try to use a dedicated YAML parser such as yq, but if for whatever reason you cannot use such a tool, you can remove the last line with the following sed statement:
sed -rni '$!p' 128-res-test-black-dev4-serviceaccounts.yaml
Here $ addresses the last line, so $!p prints every line except the last; -n suppresses the default output and -i writes the result back in place.
This can then be integrated with the existing sed command:
sed -i '/uid: \|selfLink: \|resourceVersion: \|creationTimestamp: /d' 128-res-test-black-dev4-serviceaccounts.yaml && sed -rni '$!p' 128-res-test-black-dev4-serviceaccounts.yaml
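If you prefer a single invocation, GNU sed also accepts multiple -e expressions, so the last-line deletion can ride along as its own command (a sketch, using the file name from the updated command above):

sed -i -e '/uid: \|selfLink: \|resourceVersion: \|creationTimestamp: /d' -e '$d' 128-res-test-black-dev4-serviceaccounts.yaml

And a sketch of the yq route mentioned above, assuming mikefarah's yq v4 (-i edits in place; the paths follow the file shown in the question):

yq -i 'del(.items[].metadata.creationTimestamp) | del(.items[].metadata.resourceVersion) | del(.items[].metadata.selfLink) | del(.items[].metadata.uid) | del(.metadata)' 128-res-test-black-dev4-serviceaccounts.yaml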

Related

Append Workload Identity using GCP Config connector

I am initially creating an IAMPolicy with some bindings:
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicy
metadata:
  name: storage-admin-policy
  namespace: cnrm-system
  annotations:
    argocd.argoproj.io/sync-wave: "2"
    cnrm.cloud.google.com/project-id: "mysten-sui"
spec:
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: storage-admin
    namespace: cnrm-system
  bindings:
  - role: roles/iam.workloadIdentityUser
    members:
    - serviceAccount:mysten-sui.svc.id.goog[monitoring/thanos-compactor]
But I want to append some new workloadIdentityUser entries, for which I was using:
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPartialPolicy
metadata:
  name: ax001-sa-policy
  namespace: ax001
spec:
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: storage-admin
    namespace: cnrm-system
  bindings:
  - role: roles/iam.workloadIdentityUser
    members:
    - member: serviceAccount:mysten-sui.svc.id.goog[ax001/ax001-prometheus]
But it seems the entries are not being appended; instead, they are being overridden. Is there any way I can append rather than override?
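A hedged note on the likely cause: in Config Connector, IAMPolicy is authoritative for the referenced resource, so the IAMPolicy above will keep reasserting its full binding list and wipe out anything an IAMPartialPolicy adds. One way to let the bindings merge is to manage both sets non-authoritatively, e.g. converting the first policy to an IAMPartialPolicy as well (a sketch reusing the names from the question):

apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPartialPolicy
metadata:
  name: storage-admin-policy
  namespace: cnrm-system
spec:
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: storage-admin
    namespace: cnrm-system
  bindings:
  - role: roles/iam.workloadIdentityUser
    members:
    - member: serviceAccount:mysten-sui.svc.id.goog[monitoring/thanos-compactor]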

kubernetes role and serviceaccount grant checking

I created a serviceaccount, role, and rolebinding with all grants for pods, but when I check with the auth command, it shows 'no'. Any suggestion? I've shared all the YAMLs here:
λ kl get sa -n power -oyaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2022-04-04T00:31:09Z"
    name: default
    namespace: power
    resourceVersion: "26979"
    uid: a0f88ce5-e036-4631-aa6d-a0b14583f6a7
  secrets:
  - name: default-token-dnmfs
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    creationTimestamp: "2022-04-04T00:48:48Z"
    name: ngn-sa
    namespace: power
    resourceVersion: "28498"
    uid: 3aec6beb-e3c3-4e0c-a15d-fc9b0ac5d1b8
  secrets:
  - name: ngn-sa-token-j7xrv
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
λ kl get role ngn-ro -n power -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2022-04-04T00:49:47Z"
  name: ngn-ro
  namespace: power
  resourceVersion: "28583"
  uid: 2facc03d-50a4-44be-bd38-dd6797615e70
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - '*'
λ kl get rolebinding ngn-rb -n power -oyaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2022-04-04T00:50:30Z"
  name: ngn-rb
  namespace: power
  resourceVersion: "28651"
  uid: cd08f258-24b7-4331-9f5c-cec47d9e4fa3
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ngn-ro
subjects:
- kind: ServiceAccount
  name: ngn-sa
  namespace: power
λ kl auth can-i create pods --as ngn-sa -n power
no
I am expecting a 'yes' here, but it does not seem to give me the correct auth value. Please suggest what I am missing.
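A hedged observation on the last command: --as ngn-sa impersonates a user literally named ngn-sa, not the service account. Impersonating a service account uses the full system:serviceaccount:<namespace>:<name> form, so the check would be:

kubectl auth can-i create pods -n power --as system:serviceaccount:power:ngn-sa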

Using the rollout restart command in cronjob, in GKE

I want to periodically restart a deployment using a k8s CronJob.
Please check what the problem with my YAML file is.
When I execute the command from the local command line, the deployment restarts normally, but the restart does not seem to work from the CronJob.
e.g. $ kubectl rollout restart deployment my-ingress -n my-app
My CronJob YAML file:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: deployment-restart
  namespace: my-app
spec:
  schedule: '0 8 */60 * *'
  jobTemplate:
    spec:
      backoffLimit: 2
      activeDeadlineSeconds: 600
      template:
        spec:
          serviceAccountName: deployment-restart
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command:
            - 'kubectl'
            - 'rollout'
            - 'restart'
            - 'deployment/my-ingress -n my-app'
As David suggested, each item in the command list is passed as a single argument, so 'deployment/my-ingress -n my-app' arrives as one literal string; running kubectl through a shell lets the arguments be split properly:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sa-jp-runner
          containers:
          - name: hello
            image: bitnami/kubectl:latest
            command:
            - /bin/sh
            - -c
            - kubectl rollout restart deployment my-ingress -n my-app
          restartPolicy: OnFailure
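To verify the job works without waiting for the schedule, a run can be triggered by hand in the CronJob's namespace (a sketch; the job name is arbitrary):

kubectl create job --from=cronjob/hello hello-manual-1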
I would also suggest checking the role and service account permissions.
Example for reference:
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: kubectl-cron
rules:
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - 'patch'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubectl-cron
  namespace: default
subjects:
- kind: ServiceAccount
  name: sa-kubectl-cron
  namespace: default
roleRef:
  kind: Role
  name: kubectl-cron
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sa-kubectl-cron
  namespace: default
---
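One more hedged note: kubectl rollout restart reads the Deployment before patching it, so the Role above may also need the get verb. Either way, the binding can be verified up front:

kubectl auth can-i patch deployments -n default --as system:serviceaccount:default:sa-kubectl-cron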

Patch namespace and change resource kind for a role binding

Using kustomize, I'd like to set the namespace field for all my objects.
Here is my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /kind
      value: RoleBinding
  target:
    group: rbac.authorization.k8s.io
    kind: ClusterRoleBinding
    name: manager-rolebinding
    version: v1
resources:
- role_binding.yaml
namespace: <NAMESPACE>
Here is my resource file: role_binding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: controller-manager
  namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
And kustomize output:
$ kustomize build
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: controller-manager
  namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: <NAMESPACE>
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - args:
        - --enable-leader-election
        command:
        - /manager
        image: controller:latest
        name: manager
How can I patch the namespace field in the RoleBinding and set it to <NAMESPACE>? In the above example, it works perfectly for the Deployment resource but not for the RoleBinding.
Here is a solution which solves the issue, using kustomize v4.0.5:
cat <<EOF > kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /kind
      value: RoleBinding
    - op: add
      path: /metadata/namespace
      value: <NAMESPACE>
  target:
    group: rbac.authorization.k8s.io
    kind: ClusterRoleBinding
    name: manager-rolebinding
    version: v1
resources:
- role_binding.yaml
- service_account.yaml
namespace: <NAMESPACE>
EOF
cat <<EOF > role_binding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: manager-rolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: manager-role
subjects:
- kind: ServiceAccount
  name: controller-manager
  namespace: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
EOF
cat <<EOF > service_account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: controller-manager
  namespace: system
EOF
Adding the ServiceAccount resource, and the namespace field in the RoleBinding patch, allows kustomize to correctly set the subjects field of the RoleBinding.
Looking directly from the code:
// roleBindingHack is a hack for implementing the namespace transform
// for RoleBinding and ClusterRoleBinding resource types.
// RoleBinding and ClusterRoleBinding have namespace set on
// elements of the "subjects" field if and only if the subject elements
// "name" is "default". Otherwise the namespace is not set.
//
// Example:
//
// kind: RoleBinding
// subjects:
// - name: "default" # this will have the namespace set
// ...
// - name: "something-else" # this will not have the namespace set
// ...
The ServiceAccount and the reference in the ClusterRoleBinding need to have "default" as the namespace, or otherwise it won't be replaced.
Check the example below:
$ cat <<EOF > my-resources.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-service-account
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
- kind: ServiceAccount
  name: my-service-account
  namespace: default
EOF
$ cat <<EOF > kustomization.yaml
namespace: foo-namespace
namePrefix: foo-prefix-
resources:
- my-resources.yaml
EOF
$ kustomize build
apiVersion: v1
kind: ServiceAccount
metadata:
  name: foo-prefix-my-service-account
  namespace: foo-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: foo-prefix-my-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: my-clusterrole
subjects:
- kind: ServiceAccount
  name: foo-prefix-my-service-account
  namespace: foo-namespace
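A forward-compatibility aside, not part of the original answer: kustomize v5 deprecates patchesJson6902 in favor of the unified patches field, and the same JSON 6902 patch would be expressed as (a sketch, untested against v5):

patches:
- patch: |-
    - op: replace
      path: /kind
      value: RoleBinding
    - op: add
      path: /metadata/namespace
      value: <NAMESPACE>
  target:
    group: rbac.authorization.k8s.io
    version: v1
    kind: ClusterRoleBinding
    name: manager-rolebinding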

Use variable in a patchesJson6902 of a kustomization.yaml file

I would like to set the name field in a Namespace resource and also replace the namespace field in a Deployment resource with the same value, for example my-namespace.
Here is my kustomization.yaml:
namespace: <NAMESPACE>
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesJson6902:
- patch: |-
    - op: replace
      path: /metadata/name
      value: <NAMESPACE>
  target:
    kind: Namespace
    name: system
    version: v1
resources:
- manager.yaml
and manager.yaml:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    control-plane: controller-manager
  name: system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager
  namespace: system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  replicas: 1
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - command:
        - /manager
        args:
        - --enable-leader-election
        image: controller:latest
        name: manager
I tried using kustomize edit set namespace my-namespace && kustomize build, but it only changes the namespace field in the Deployment object.
Is there a way to change both fields without using sed, in 'pure' kustomize and without having to manually change the value in kustomization.yaml?
I managed to achieve something similar with the following configuration:
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: my-namespace
resources:
- deployment.yaml
deployment.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
And here is the output of the command that you used:
$ kustomize edit set namespace my-namespace7 && kustomize build .
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace7
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: my-namespace7
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.2
        name: nginx
        ports:
        - containerPort: 80
What is happening here is that once you set the namespace globally in kustomization.yaml, kustomize applies it to all your targets, which looks to me like an easier way to achieve what you want.
I cannot test your config without the manager_patch.yaml content. If you wish to go further with your approach, you will have to update the question with the file content.