Define a standalone patch as YAML - kubernetes

I have a need to define a standalone patch as YAML.
More specifically, I want to do the following:
kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "registry-my-registry"}]}'
The catch is I can't use kubectl patch. I'm using a GitOps workflow with flux, and that resource I want to patch is a default resource created outside of flux.
In other terms, I need to do the same thing as the command above but with kubectl apply only:
kubectl apply -f patch.yaml
I wasn't able to figure out if you can define such a patch.
The key bit is that I can't predict the name of the default secret token on a new cluster (the name is random, e.g. default-token-uudge).

Fields set and deleted from Resource Config are merged into Resources by Kubectl apply:
If a Resource already exists, Apply updates the Resources by merging the
local Resource Config into the remote Resources
Fields removed from the Resource Config will be deleted from the remote Resource
You can learn more about Kubernetes Field Merge Semantics.
If your limitation is not knowing the secret default-token-xxxxx name, that's no problem: just leave that field out of your yaml.
As long as the yaml has enough fields to identify the target resource (name, kind, namespace) it will add/edit the fields you set.
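Because flux ultimately applies the manifests committed to your repository with kubectl apply, a minimal manifest like the one built below should achieve the same result as the kubectl patch command, even on a fresh cluster where the token name is unknown.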
I created a cluster (minikube in this example, but it could be any) and retrieved the current default serviceAccount:
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "330"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
Then, we create a yaml file with the fields that we want to add:
$ cat add-image-pull-secrets.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: default
imagePullSecrets:
- name: registry-my-registry
Now we apply and verify:
$ kubectl apply -f add-image-pull-secrets.yaml
serviceaccount/default configured
$ kubectl get serviceaccount default -o yaml
apiVersion: v1
imagePullSecrets:
- name: registry-my-registry
kind: ServiceAccount
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","imagePullSecrets":[{"name":"registry-my-registry"}],"kind":"ServiceAccount","metadata":{"annotations":{},"name":"default","namespace":"default"}}
  creationTimestamp: "2020-07-01T14:51:38Z"
  name: default
  namespace: default
  resourceVersion: "2382"
  selfLink: /api/v1/namespaces/default/serviceaccounts/default
  uid: a9e5ff4a-8bfb-466f-8873-58c2172a5d11
secrets:
- name: default-token-j6zx2
As you can see, the imagePullSecrets field was added to the resource, while the existing secrets entry was left untouched.
I hope it fits your needs. If you have any further questions let me know in the comments.

Let's say your service account YAML looks like below:
$ kubectl get sa demo -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
Now, you want to add or change the imagePullSecrets for that service account. To do so, edit the YAML file and add imagePullSecrets.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo
  namespace: default
secrets:
- name: default-token-uudge
imagePullSecrets:
- name: myregistrykey
And finally, apply the changes:
$ kubectl apply -f service-account.yaml
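Since the token name can't be predicted on a new cluster, you can also drop the secrets field entirely and apply only the identifying fields plus imagePullSecrets, as shown in the previous answer. Either way, you can verify the merge afterwards with:
$ kubectl get sa demo -o yaml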

Related

How to run kubectl within a job in a namespace?

Hi, I saw this documentation where kubectl can run inside a pod in the default namespace.
Is it possible to run kubectl inside a Job resource in a specified namespace? I did not see any documentation or examples for this.
When I tried adding serviceAccounts to the container I got the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
This was when I tried sshing into the container and running kubectl.
Edit: as mentioned earlier, based on the documentation I had added the service accounts. Below is the yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources:
  - pods
  verbs:
  - get
  - list
  - delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl
        command:
        - "/bin/bash"
        - "-c"
        - "kubectl get pods"
      restartPolicy: Never
On running the job, I get the error:
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:my-namespace:internal-kubectl" cannot list resource "pods" in API group "" in the namespace "my-namespace"
Is it possible to run kubectl inside a Job resource in a specified namespace? Did not see any documentation or examples for the same..
A Job creates one or more Pods and ensures that a specified number of them successfully terminate. This means the permission model is the same as for a normal pod, so yes, it is possible to run kubectl inside a Job resource.
TL;DR:
Your yaml file is correct; maybe there was something else wrong in your cluster. I recommend deleting and recreating these resources and trying again.
Also check the version of your Kubernetes installation and the job image's kubectl version: if they are more than one minor version apart, you may hit unexpected incompatibilities.
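A quick way to verify that the Role and RoleBinding actually grant the permission (using the names from the question) is kubectl's built-in impersonation check, which should print yes once everything is in place:
$ kubectl auth can-i list pods \
    --as=system:serviceaccount:my-namespace:internal-kubectl \
    -n my-namespace
yes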
Security Considerations:
Your job's role scope follows the best practice according to the documentation (a specific role, bound to a specific service account, in a specific namespace).
If you use a ClusterRoleBinding with the cluster-admin role it will work, but it's over-permissioned and not recommended, since it gives full admin control over the entire cluster.
Test Environment:
I deployed your config on Kubernetes 1.17.3 and ran the job with both bitnami/kubectl and bitnami/kubectl:1.17.3. It worked in both cases.
To avoid incompatibilities, use a kubectl image whose version matches your server.
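You can compare client and server versions with the following (the --short flag exists in kubectl versions of this era but has since been removed):
$ kubectl version --short
Client Version: v1.17.3
Server Version: v1.17.3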
Reproduction:
$ cat job-kubectl.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: testing-stuff
  namespace: my-namespace
spec:
  template:
    metadata:
      name: testing-stuff
    spec:
      serviceAccountName: internal-kubectl
      containers:
      - name: tester
        image: bitnami/kubectl:1.17.3
        command:
        - "/bin/bash"
        - "-c"
        - "kubectl get pods -n my-namespace"
      restartPolicy: Never
$ cat job-svc-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
  namespace: my-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: modify-pods
  namespace: my-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: modify-pods-to-sa
  namespace: my-namespace
subjects:
- kind: ServiceAccount
  name: internal-kubectl
roleRef:
  kind: Role
  name: modify-pods
  apiGroup: rbac.authorization.k8s.io
I created two pods just to give the get pods command some output to log.
$ kubectl run curl --image=radial/busyboxplus:curl -i --tty --namespace my-namespace
the pod is running
$ kubectl run ubuntu --generator=run-pod/v1 --image=ubuntu -n my-namespace
pod/ubuntu created
Then I apply the job, ServiceAccount, Role and RoleBinding
$ kubectl get pods -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 88s
testing-stuff-ddpvf 0/1 Completed 0 13s
ubuntu 0/1 Completed 3 63s
Now let's check the testing-stuff pod log to see if it logged the command output:
$ kubectl logs testing-stuff-ddpvf -n my-namespace
NAME READY STATUS RESTARTS AGE
curl-69c656fd45-l5x2s 1/1 Running 1 76s
testing-stuff-ddpvf 1/1 Running 0 1s
ubuntu 1/1 Running 3 51s
As you can see, the job ran successfully with the custom ServiceAccount.
Let me know if you have further questions about this case.
Create the service account like this:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: internal-kubectl
Create a ClusterRoleBinding using this:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: modify-pods-to-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: internal-kubectl
  namespace: default   # ClusterRoleBinding subjects need an explicit namespace
Now create the pod with the same config that is given in the documentation.
When you use kubectl from the pod for any operation, such as getting pods or creating roles and role bindings, it will use the default service account. This service account doesn't have permission to perform those operations by default. So you need to create a service account, role, and rolebinding using a more privileged account: you should have a kubeconfig file with admin (or admin-like) privileges, and use it with kubectl from outside the pod to create the service account, role, rolebinding, etc.
After that is done, create the pod specifying that service account, and you should be able to perform the operations defined in the role from within the pod using kubectl.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  serviceAccountName: internal-kubectl

Kubernetes Ansible Operators - Patch an Existing Kubernetes Resource

With ansible: is it possible to patch resources with json or yaml snippets? I basically want to be able to accomplish the same thing as kubectl patch <Resource> <Name> --type='merge' -p='{"spec":{"test":"hello"}}', to append/modify resource specs.
https://docs.ansible.com/ansible/latest/modules/k8s_module.html
Is it possible to do this with the k8s ansible module? It says that if a resource already exists and "state: present" is set, it will patch it; however, it isn't patching as far as I can tell.
Thanks
Yes, you can provide just a patch and if the resource already exists it should send a strategic-merge-patch (or just a merge-patch if it's a custom resource). Here's an example playbook that creates and modifies a configmap:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    cm: "{{ lookup('k8s',
           api_version='v1',
           kind='ConfigMap',
           namespace='default',
           resource_name='test') }}"
  tasks:
    - name: Create the ConfigMap
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            hello: world
    - name: We will see the ConfigMap defined above
      debug:
        var: cm
    - name: Add a field to the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            added: field
    - name: The same ConfigMap as before, but with an extra field in data
      debug:
        var: cm
    - name: Change a field in the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            hello: everyone
    - name: The added field is unchanged, but the hello field has a new value
      debug:
        var: cm
    - name: Delete the added field in the ConfigMap (this will be a PATCH request)
      k8s:
        definition:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: test
            namespace: default
          data:
            added: null
    - name: The hello field is unchanged, but the added field is now gone
      debug:
        var: cm
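Assuming you save the playbook as patch-configmap.yml (the filename is just for illustration), run it with:
$ ansible-playbook patch-configmap.yml
The cm debug output should change between tasks, confirming each PATCH was applied.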

Can Kubernetes Cronjobs Reuse Environment Variables From Existing Deployment?

I am using a Kubernetes CronJob to run periodic database restores and post-restore scripts against the target environment, which includes tasks such as working with the database, redis, and the file system.
The issue I am facing is that I have to re-define all the environment variables I use in my Deployment within the CronJob (e.g., DATABASE_NAME, DATABASE_PASSWORD, REDIS_HOST, etc.).
While repeating all the environment variables works, it is error-prone: I have already forgotten to update the jobs once, which meant re-running the entire process, which takes 2-4 hours depending on the environment.
Is there a way to reference an existing Deployment and re-use the defined environment variables within my Cronjob?
You can use a 'kind: PodPreset' object to define and inject common env variables into multiple Kubernetes objects like deployments/statefulsets/pods/replicasets etc. (note that PodPreset is an alpha feature and must be enabled on the cluster).
Follow the link for help: https://kubernetes.io/docs/tasks/inject-data-application/podpreset/
I don't think you can reuse environment variables unless they come from Secrets or ConfigMaps. So if you don't want to use Secrets for non-sensitive data, you can use ConfigMaps, as below:
kubectl create configmap redis-uname --from-literal=username=jp
[root@master ~]# kubectl get cm redis-uname -o yaml
apiVersion: v1
data:
  username: jp
kind: ConfigMap
metadata:
  creationTimestamp: "2019-11-28T21:38:18Z"
  name: redis-uname
  namespace: default
  resourceVersion: "1090703"
  selfLink: /api/v1/namespaces/default/configmaps/redis-uname
  uid: 1a9e3cce-50b1-448b-8bae-4b2c6ccb6861
[root@master ~]#
[root@master ~]# echo -n 'K8sCluster!' | base64
SzhzQ2x1c3RlciE=
[root@master ~]# cat redis-sec.yaml
apiVersion: v1
kind: Secret
metadata:
  name: redissecret
data:
  password: SzhzQ2x1c3RlciE=
[root@master ~]#
[root@master ~]# kubectl apply -f redis-sec.yaml
secret/redissecret created
[root@master ~]# kubectl get secret redissecret -o yaml
apiVersion: v1
data:
  password: SzhzQ2x1c3RlciE=
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"password":"SzhzQ2x1c3RlciE="},"kind":"Secret","metadata":{"annotations":{},"name":"redissecret","namespace":"default"}}
  creationTimestamp: "2019-11-28T21:40:18Z"
  name: redissecret
  namespace: default
  resourceVersion: "1090876"
  selfLink: /api/v1/namespaces/default/secrets/redissecret
  uid: 2b6acdcd-d7c6-4e50-bd0e-8c323804155b
type: Opaque
[root@master ~]#
apiVersion: v1
kind: Pod
metadata:
  name: "redis-sec-env-pod"
spec:
  containers:
  - name: redis-sec-env-cn
    image: "redis"
    env:
    - name: username
      valueFrom:
        configMapKeyRef:
          name: redis-uname
          key: username
    - name: password
      valueFrom:
        secretKeyRef:
          name: redissecret
          key: password
[root@master ~]# kubectl apply -f reddis_sec_pod.yaml
pod/redis-sec-env-pod created
[root@master ~]# kubectl exec -it redis-sec-env-pod sh
# env | grep -i user
username=jp
# env | grep -i pass
password=K8sCluster!
#
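Another option that avoids repeating each variable: have both the Deployment and the CronJob load entire ConfigMaps/Secrets with envFrom. A minimal sketch (the names app-config, app-secrets, and the image are hypothetical, not from the question; CronJob was batch/v1beta1 at this time):
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: db-restore            # hypothetical name
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: restore
            image: my-restore-image   # hypothetical image
            envFrom:
            - configMapRef:
                name: app-config      # same ConfigMap the Deployment uses
            - secretRef:
                name: app-secrets     # same Secret the Deployment uses
          restartPolicy: OnFailure
Put the identical envFrom block in the Deployment's pod spec and both workloads stay in sync automatically.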

create a custom resource in kubernetes using generateName field

I have a sample crd defined as
crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # CRD names must be <plural>.<group>; CRDs are also cluster-scoped,
  # so the metadata takes no namespace
  name: testconfigs.demo.k8s.com
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testconfigs
    singular: testconfig
    kind: TestConfig
I want to create a custom resource based on the above CRD, but I don't want to assign a fixed name to the resource; rather, I want to use the generateName field. So I wrote the below cr.yaml, but when I apply it, I get an error that the name field is mandatory.
apiVersion: demo.k8s.com/v1
kind: TestConfig
metadata:
  generateName: test-name-
  namespace: testns
spec:
  image: testimage
Any help is highly appreciated.
You should use kubectl create to create your CR with generateName:
"kubectl apply will verify the existence of the resources before taking action. If the resources do not exist, it will first create them. If you use generateName, the resource name is not yet generated when verifying the existence of the resource." (source)

How does K8S handle multiple remote docker registries in a POD definition using the imagePullSecrets list

I would like to access multiple remote registries to pull images.
In the k8s documentation they say:
(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)
and so the POD definition should be something like this:
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: ...
  imagePullSecrets:
  - name: secret1
  - name: secret2
  - ....
  - name: secretN
Now I am not sure how K8S will pick the right secret for each image. Will all secrets be tried one by one each time? How will K8S handle failed attempts, and could a certain number of unauthorized retries lead to some lock-out state in k8s or the docker registries?
Thanks
You can use the following script to add two registry authentications in one secret:
#!/bin/bash
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)

u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)

cat <<EOF > docker_config.json
{
  "auths": {
    "repo1_name_here": {
      "auth": "$auth1"
    },
    "repo2_name_here": {
      "auth": "$auth2"
    }
  }
}
EOF

base64 -w0 docker_config.json > docker_config_b64.json

cat <<EOF | kubectl apply -f -
apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
EOF
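To double-check what ended up in the secret (placeholder names as in the script above):
$ kubectl get secret specify_secret_name_here -n specify_namespace_here \
    -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
This should print the merged auths JSON containing both registry entries.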
Kubernetes isn't going to try all secrets until it finds the correct one. When you create the secret, you declare that it's for a docker registry:
$ kubectl create secret docker-registry user1-secret --docker-server=https://index.docker.io/v1/ --docker-username=user1 --docker-password=PASSWORD456 --docker-email=user1@email.com
$ kubectl create secret docker-registry user2-secret --docker-server=https://index.docker.io/v1/ --docker-username=user2 --docker-password=PASSWORD123 --docker-email=user2@email.com
$ kubectl get secrets user1-secret -o yaml
apiVersion: v1
data:
  .dockerconfigjson: eyJhdXRocyI6eyJkb2NrZXIuZXhhbXBsZS5jb20iOnsidXNlcm5hbWUiOiJrdWJlIiwicGFzc3dvcmQiOiJQV19TVFJJTkciLCJlbWFpbCI6Im15QGVtYWlsLmNvbSIsImF1dGgiOiJhM1ZpWlRwUVYxOVRWRkpKVGtjPSJ9fX0=
kind: Secret
metadata:
  creationTimestamp: "2020-01-13T13:15:52Z"
  name: user1-secret
  namespace: default
  resourceVersion: "1515301"
  selfLink: /api/v1/namespaces/default/secrets/user1-secret
  uid: d2f3bb0c-3606-11ea-a202-42010a8000ad
type: kubernetes.io/dockerconfigjson
As you can see, the type kubernetes.io/dockerconfigjson tells Kubernetes to treat this secret differently.
So when you reference your container image address as magic.example.com/magic-image in your yaml, Kubernetes has enough information to connect the dots and use the right secret to pull the image.
apiVersion: v1
kind: Pod
metadata:
  name: busyboxes
  namespace: default
spec:
  imagePullSecrets:
  - name: user1-secret
  - name: user2-secret
  containers:
  - name: jenkins
    image: user1/jenkins
    imagePullPolicy: Always
  - name: busybox
    image: user2/busybox
    imagePullPolicy: Always
So as this example shows, it's possible to have two or more docker registry secrets, even with the same --docker-server value. Kubernetes will take care of it seamlessly.