I'm trying to patch multiple targets of different types (say, a Deployment and a ReplicaSet) using the kubectl command. I've made the following file with all the patch info:
patch_list_changes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metric-sd
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: sd-dummy-exporter
        resources:
          requests:
            cpu: 90m
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: php-redis
        resources:
          requests:
            cpu: 200m
I've tried the following commands in the terminal, but nothing allows my patch to work:
> kubectl patch -f patch_list_changes.yaml --patch-file patch_list_changes.yaml
deployment.apps/custom-metric-sd patched
Error from server (BadRequest): the name of the object (custom-metric-sd) does not match the name on the URL (frontend)
and
> kubectl apply -f patch_list_changes.yaml
error: error validating "patch_list_changes.yaml": error validating data: [ValidationError(Deployment.spec.template.spec): unknown field "resources" in io.k8s.api.core.v1.PodSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
Is there any way to run multiple patches in a single command?
The appropriate approach is to use a Kustomization for this purpose:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
Based upon those samples I wrote the following example: prepare your patch files and reference them from a kustomization.yaml (a sketch of one of the patch files follows the manifest below).
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../_base
patchesStrategicMerge:
- patch-memory.yaml
- patch-replicas.yaml
- patch-service.yaml
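For completeness, a minimal sketch of what one of those patch files might look like; the Deployment name my-app is a placeholder and must match a resource coming from the base:

# patch-memory.yaml (sketch; "my-app" is a placeholder that must match
# a Deployment defined in ../../_base)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  template:
    spec:
      containers:
      - name: my-app
        resources:
          requests:
            memory: 256Mi

With that in place, a single kubectl apply -k . (or kustomize build . | kubectl apply -f -) applies all the patches in one command.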
Related
I'm trying to deploy a Kubernetes processor to a cluster on GCP GKE but the pod fails with the following error:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
This is my deployment.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dbt-core-processor
  namespace: prod
  labels:
    app: dbt-core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dbt-core
  template:
    metadata:
      labels:
        app: dbt-core
    spec:
      containers:
      - name: dbt-core-processor
        image: IMAGE
        resources:
          requests:
            cpu: 50m
            memory: 1Gi
          limits:
            cpu: 1
            memory: 2Gi
        env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          valueFrom:
            secretKeyRef:
              name: service-account-credentials-dbt-test
              key: service-account-credentials-dbt-test
---
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  backendType: gcpSecretsManager
  data:
  - key: service-account-credentials-dbt-test
    name: service-account-credentials-dbt-test
    version: latest
When I run kubectl apply -f deployment.yml I get the following error:
deployment.apps/dbt-core-processor created
error: unable to recognize "deployment.yml": no matches for kind "ExternalSecret" in version "kubernetes-client.io/v1"
This creates my processor but the pod fails to spin up the secrets:
secret "service-account-credentials-dbt-test" not found: CreateContainerConfigError
How do I add the secrets from my secrets manager in GCP to this deployment?
ExternalSecret is a custom resource definition (CRD), and it looks like it is not installed on your cluster.
I googled kubernetes-client.io/v1, and it looks like you may be following instructions from the old, archived kubernetes-external-secrets project that first provided this CRD. Its GitHub repo pointed me to the maintained project that has replaced it, the External Secrets Operator.
The good news is that the current project has what looks like comprehensive documentation, including a guide to installing the CRDs on your cluster and the proper configuration for the ExternalSecret resource.
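For reference, here is a hedged sketch of what the replacement resource looks like with the External Secrets Operator. The operator itself is typically installed from its Helm chart per the project's install guide, and the secret-store name gcp-secret-store below is an assumption; check the exact field names against the docs for the version you install:

# Sketch only: assumes the External Secrets Operator is installed and a
# ClusterSecretStore named "gcp-secret-store" is configured for GCP Secret Manager.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: service-account-credentials-dbt-test
  namespace: prod
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: gcp-secret-store        # assumption: the name of your (Cluster)SecretStore
    kind: ClusterSecretStore
  target:
    name: service-account-credentials-dbt-test     # the Secret your Deployment references
  data:
  - secretKey: service-account-credentials-dbt-test
    remoteRef:
      key: service-account-credentials-dbt-test    # the secret's name in GCP Secret Manager

Once the operator reconciles this, a regular Kubernetes Secret with that name appears in the prod namespace and the secretKeyRef in the Deployment resolves as before.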
I have the file example-workflow-cowsay.yml:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
      resources:
        limits:
          memory: 32Mi
          cpu: 100m
I can submit this successfully like this: argo submit -n workflows apps/workflows/example-workflow-cowsay.yml.
Can I get the same thing done using kubectl directly? I tried the below but it fails:
$ k apply -n workflows -f apps/workflows/example-workflow-cowsay.yml
error: from hello-world-: cannot use generate name with apply
Yes, it's right there in the readme (version at the time of answering).
kubectl -n workflows create -f apps/workflows/example-workflow-cowsay.yml did the job.
To elaborate a bit: This makes sense, as what I was trying to "apply" was a single run of a workflow (think an object instance rather than a class). If I'd tried to apply a CronWorkflow, then kubectl apply would have worked. The error message that I got:
error: from hello-world-: cannot use generate name with apply
told me about it, but I didn't understand it at the time. This is invalid:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  generateName: some-name
...
But this is valid:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: some-name
...
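If you need the server-generated name afterwards (for kubectl get or logs), a small sketch using standard kubectl flags:

# create the Workflow and capture the generated name
WF=$(kubectl -n workflows create -f apps/workflows/example-workflow-cowsay.yml -o name)
echo "$WF"   # e.g. workflow.argoproj.io/hello-world-abcde
kubectl -n workflows get "$WF"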
I have a deployment which includes a configMap, persistentVolumeClaim, and a service. I have changed the configMap and re-applied the deployment to my cluster. I understand that this change does not automatically restart the pod in the deployment:
configmap change doesn't reflect automatically on respective pods
Updated configMap.yaml but it's not being applied to Kubernetes pods
I know that I can kubectl delete -f wiki.yaml && kubectl apply -f wiki.yaml. But that destroys the persistent volume which has data I want to survive the restart. How can I restart the pod in a way that keeps the existing volume?
Here's what wiki.yaml looks like:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dot-wiki
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: wiki-config
data:
  config.json: |
    {
      "farm": true,
      "security_type": "friends",
      "secure_cookie": false,
      "allowed": "*"
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wiki-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wiki
  template:
    metadata:
      labels:
        app: wiki
    spec:
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      initContainers:
      - name: wiki-config
        image: dobbs/farm:restrict-new-wiki
        securityContext:
          runAsUser: 0
          runAsGroup: 0
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: dot-wiki
          mountPath: /home/node/.wiki
        command: ["chown", "-R", "1000:1000", "/home/node/.wiki"]
      containers:
      - name: farm
        image: dobbs/farm:restrict-new-wiki
        command: [
          "wiki", "--config", "/etc/config/config.json",
          "--admin", "bad password but memorable",
          "--cookieSecret", "any-random-string-will-do-the-trick"]
        ports:
        - containerPort: 3000
        volumeMounts:
        - name: dot-wiki
          mountPath: /home/node/.wiki
        - name: config-templates
          mountPath: /etc/config
      volumes:
      - name: dot-wiki
        persistentVolumeClaim:
          claimName: dot-wiki
      - name: config-templates
        configMap:
          name: wiki-config
---
apiVersion: v1
kind: Service
metadata:
  name: wiki-service
spec:
  ports:
  - name: http
    targetPort: 3000
    port: 80
  selector:
    app: wiki
In addition to kubectl rollout restart deployment, there are some alternative approaches to do this:
1. Restart Pods
kubectl delete pods -l app=wiki
This causes the Pods of your Deployment to be restarted, in which case they read the updated ConfigMap.
2. Version the ConfigMap
Instead of naming your ConfigMap just wiki-config, name it wiki-config-v1. Then when you update your configuration, just create a new ConfigMap named wiki-config-v2.
Now, edit your Deployment specification to reference the wiki-config-v2 ConfigMap instead of wiki-config-v1:
apiVersion: apps/v1
kind: Deployment
# ...
      volumes:
      - name: config-templates
        configMap:
          name: wiki-config-v2
Then, reapply the Deployment:
kubectl apply -f wiki.yaml
Since the Pod template in the Deployment manifest has changed, reapplying the Deployment will recreate all the Pods, and the new Pods will use the new version of the ConfigMap.
As an additional advantage of this approach, if you keep the old ConfigMap (wiki-config-v1) around rather than deleting it, you can revert to a previous configuration at any time by just editing the Deployment manifest again.
This approach is described in Chapter 1 of Kubernetes Best Practices (O'Reilly, 2019).
For the specific question about restarting containers after the configuration is changed, as of kubectl v1.15 you can do this:
# apply the config changes
kubectl apply -f wiki.yaml
# restart the containers in the deployment
kubectl rollout restart deployment wiki-deployment
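If you want to wait for the restart to finish, or roll back if it goes wrong, the other rollout subcommands apply here as well:

# block until the new Pods are ready
kubectl rollout status deployment wiki-deployment
# revert to the previous ReplicaSet if needed
kubectl rollout undo deployment wiki-deployment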
You should do nothing but change your ConfigMap and wait for the changes to be applied. The answer you linked to is wrong: after a ConfigMap change, the update isn't applied right away, but it can take some time, something like 5 minutes.
If that doesn't happen, you can report a bug about that specific version of k8s.
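Note that this automatic refresh only applies to ConfigMaps consumed as volumes, as /etc/config is in the manifest above; environment variables taken from a ConfigMap are not refreshed in running Pods. A quick way to check whether the mounted file has picked up the change:

# print the config file as the running container sees it
kubectl exec deploy/wiki-deployment -c farm -- cat /etc/config/config.json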
I'm new to Kubernetes. In my project I'm trying to use Kustomize to generate configMaps for my deployment. Kustomize adds a hash after the configMap name, but I can't get it to also change the deployment to use that new configMap name.
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-receiver-deployment
  labels:
    app: env-receiver-app
    project: env-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-receiver-app
  template:
    metadata:
      labels:
        app: env-receiver-app
        project: env-project
    spec:
      containers:
      - name: env-receiver-container
        image: eu.gcr.io/influxdb-241011/env-receiver:latest
        resources: {}
        ports:
        - containerPort: 8080
        envFrom:
        - configMapRef:
            name: env-receiver-config
        args: [ "-port=$(ER_PORT)", "-dbaddr=$(ER_DBADDR)", "-dbuser=$(ER_DBUSER)", "-dbpass=$(ER_DBPASS)" ]
kustomize.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: env-receiver-config
  literals:
  - ER_PORT=8080
  - ER_DBADDR=http://localhost:8086
  - ER_DBUSER=writeuser
  - ER_DBPASS=writeuser
Then I run Kustomize, apply the deployment, and check whether the environment was applied:
$ kubectl apply -k .
configmap/env-receiver-config-258g858mgg created
$ kubectl apply -f k8s/deployment.yml
deployment.apps/env-receiver-deployment unchanged
$ kubectl describe pod env-receiver-deployment-76c678dcf-5r2hl
Name: env-receiver-deployment-76c678dcf-5r2hl
[...]
Environment Variables from:
env-receiver-config ConfigMap Optional: false
Environment: <none>
[...]
But it still gets its environment variables from env-receiver-config, not env-receiver-config-258g858mgg.
My current workaround is to disable the hash suffixes in the kustomize.yml.
generatorOptions:
  disableNameSuffixHash: true
It looks like I'm missing a step to tell the deployment the name of the new configMap. What is it?
It looks like the problem comes from the fact that you generate the ConfigMap through Kustomize but apply the Deployment via kubectl directly, without going through Kustomize.
Basically, Kustomize looks for every reference to env-receiver-config in your resources and replaces it with the hash-suffixed name.
For that to work, all of your resources have to go through Kustomize.
To do so, you need to add to your kustomization.yml:
resources:
- yourDeployment.yml
and then just run kubectl apply -k . again. It should create both the ConfigMap and the Deployment using the right ConfigMap name; a full sketch follows.
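Putting it together, a hedged sketch of the complete kustomization.yml, using the k8s/deployment.yml path from the question:

# kustomization.yml (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- k8s/deployment.yml
configMapGenerator:
- name: env-receiver-config
  literals:
  - ER_PORT=8080
  - ER_DBADDR=http://localhost:8086
  - ER_DBUSER=writeuser
  - ER_DBPASS=writeuser

With this in place, kubectl apply -k . emits both the hashed ConfigMap and a Deployment whose configMapRef already points at env-receiver-config-<hash>.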
I'm trying to run kubectl -f pod.yaml but I'm getting this error. Any hint?
error: error validating "/pod.yaml": error validating data: [ValidationError(Pod): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "nodeSelector" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "tasks" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-10.0.1
  namespace: e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
  - name: main
    image: bded587f4604
imagePullSecrets: ["testo", "awsecr-cred"]
nodeSelector:
  kubernetes.io/hostname: 11-4730
tasks:
- name: traind
  command: et estimate -e v/lat/exent_sps/enet/default_sql.spec.txt -r /out
  completions: 1
  inputs:
    datasets:
    - name: poa
      version: 2018-
      mountPath: /in/0
You have an indentation error in your pod.yaml definition: imagePullSecrets belongs under spec, and each entry needs a - name: key. It should be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-test-pod-10.0.1.11-e8b74730
  namespace: test-e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
  - name: main
    image: test.io/tets/maglev-test-bded587f4604
  imagePullSecrets:
  - name: testawsecr-cred
...
Note that imagePullSecrets: is plural and an array, so you can specify credentials for multiple registries.
If you are using Docker you can also specify multiple credentials in ~/.docker/config.json.
If you have the same credentials in imagePullSecrets: and configs in ~/.docker/config.json, the credentials are merged.
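If the referenced pull secret does not exist yet, it is usually created as a docker-registry secret; a sketch with placeholder values:

# create the pull secret referenced by imagePullSecrets (values are placeholders)
kubectl create secret docker-registry testawsecr-cred \
  --docker-server=<your-registry-server> \
  --docker-username=<user> \
  --docker-password=<password> \
  --namespace=test-e6a5089f-8e9e-4647-abe3-b8d775079565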