I have the file example-workflow-cowsay.yml:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay
      command: [cowsay]
      args: ["hello world"]
      resources:
        limits:
          memory: 32Mi
          cpu: 100m
I can submit this successfully like this: argo submit -n workflows apps/workflows/example-workflow-cowsay.yml.
Can I get the same thing done using kubectl directly? I tried the below but it fails:
$ k apply -n workflows -f apps/workflows/example-workflow-cowsay.yml
error: from hello-world-: cannot use generate name with apply
Yes, it's right there in the readme (version at the time of answering).
kubectl -n workflows create -f apps/workflows/example-workflow-cowsay.yml did the job.
To elaborate a bit: This makes sense, as what I was trying to "apply" was a single run of a workflow (think an object instance rather than a class). If I'd tried to apply a CronWorkflow, then kubectl apply would have worked. The error message that I got:
error: from hello-world-: cannot use generate name with apply
Told me about it, but I didn't understand it at the time. This is invalid:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  generateName: some-name
...
But this is valid:
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: some-name
...
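As a side note: kubectl create does a POST, so the API server expands generateName into a fresh unique name on every submission, which is why the same Workflow manifest can be submitted repeatedly. If you need the generated name (for scripting, say), kubectl's standard -o name output prints it; a small sketch:

kubectl -n workflows create -f apps/workflows/example-workflow-cowsay.yml -o name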
I'm trying to patch multiple targets of different types (let's say a Deployment and a ReplicaSet) using the kubectl command. I've made the following file with all the patch info:
patch_list_changes.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metric-sd
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: sd-dummy-exporter
        resources:
          requests:
            cpu: 90m
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: php-redis
        resources:
          requests:
            cpu: 200m
I've tried the following commands in the terminal, but nothing gets my patch to work:
> kubectl patch -f patch_list_changes.yaml --patch-file patch_list_changes.yaml
deployment.apps/custom-metric-sd patched
Error from server (BadRequest): the name of the object (custom-metric-sd) does not match the name on the URL (frontend)
and
> kubectl apply -f patch_list_changes.yaml
error: error validating "patch_list_changes.yaml": error validating data: [ValidationError(Deployment.spec.template.spec): unknown field "resources" in io.k8s.api.core.v1.PodSpec, ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these errors, turn validation off with --validate=false
is there any way to run multiple patches in a single command?
The appropriate approach is to use Kustomize for this purpose:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
Based upon those samples I wrote the following example. Prepare your patches and reference them from a Kustomization:
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../../_base
patchesStrategicMerge:
- patch-memory.yaml
- patch-replicas.yaml
- patch-service.yaml
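Each entry under patchesStrategicMerge is an ordinary partial manifest. For instance, the Deployment change from the question could live in its own patch file and be listed there like the ones above (the file name below is hypothetical and the ../../_base layout is just the lab's convention; the values come from the question):

# patch-cpu-requests.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: custom-metric-sd
  namespace: default
spec:
  template:
    spec:
      containers:
      - name: sd-dummy-exporter
        resources:
          requests:
            cpu: 90m

A single kubectl apply -k . (or kustomize build . | kubectl apply -f -) then applies all listed patches in one command.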
I am trying out mock exams on Udemy and have created a multi-container pod, but the exam result says the command is not set correctly on container test2. I am not able to identify the issue.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: test1
    env:
    - name: type
      value: demo1
  - image: busybox
    name: test2
    env:
    - name: type
       value: demo2
    command: ["sleep", "4800"]
An easy way to do this is to use an imperative kubectl command to generate the YAML for a single container, then edit that YAML to add the other container:
kubectl run nginx --image=nginx --command -oyaml --dry-run=client -- sh -c 'sleep 1d' > nginx.yaml
In this example sleep 1d is the command.
The generated yaml looks like below.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - command:
    - sh
    - -c
    - sleep 1d
    image: nginx
    name: nginx
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
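After adding the second container from the question, the edited manifest might look like this (container names, images, and env values are taken from the question; only the indentation and command placement are adjusted):

apiVersion: v1
kind: Pod
metadata:
  labels:
    run: multi-pod
  name: multi-pod
spec:
  containers:
  - image: nginx
    name: test1
    env:
    - name: type
      value: demo1
  - image: busybox
    name: test2
    command: ["sleep", "4800"]
    env:
    - name: type
      value: demo2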
Your issue is with your YAML, on line 19 of your manifest.
Please keep in mind that YAML syntax is very sensitive to spaces and tabs.
Your issue:
- image: busybox
  name: test2
  env:
  - name: type
     value: demo2 ### Issue is in this line, you have one extra space
  command: ["sleep", "4800"]
Solution:
Remove the space; it will then look like this:
env:
- name: type
  value: demo2
For validation of YAML you can use an external validator like yamllint.
If you paste your YAML into that validator, you will receive this error:
(<unknown>): mapping values are not allowed in this context at line 19 column 14
After removing the extra space, you will get:
Valid YAML!
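If you prefer a local check, the yamllint command-line tool reports the same class of problem (assuming it is installed; the file name is just an example):

yamllint multi-pod.yaml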
I'm new to Kubernetes. In my project I'm trying to use Kustomize to generate configMaps for my deployment. Kustomize adds a hash after the configMap name, but I can't get it to also change the deployment to use that new configMap name.
Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: env-receiver-deployment
  labels:
    app: env-receiver-app
    project: env-project
spec:
  replicas: 1
  selector:
    matchLabels:
      app: env-receiver-app
  template:
    metadata:
      labels:
        app: env-receiver-app
        project: env-project
    spec:
      containers:
        - name: env-receiver-container
          image: eu.gcr.io/influxdb-241011/env-receiver:latest
          resources: {}
          ports:
            - containerPort: 8080
          envFrom:
            - configMapRef:
                name: env-receiver-config
          args: [ "-port=$(ER_PORT)", "-dbaddr=$(ER_DBADDR)", "-dbuser=$(ER_DBUSER)", "-dbpass=$(ER_DBPASS)" ]
kustomize.yml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
- name: env-receiver-config
  literals:
  - ER_PORT=8080
  - ER_DBADDR=http://localhost:8086
  - ER_DBUSER=writeuser
  - ER_DBPASS=writeuser
Then I run Kustomize, apply the deployment, and check whether the environment was applied:
$ kubectl apply -k .
configmap/env-receiver-config-258g858mgg created
$ kubectl apply -f k8s/deployment.yml
deployment.apps/env-receiver-deployment unchanged
$ kubectl describe pod env-receiver-deployment-76c678dcf-5r2hl
Name: env-receiver-deployment-76c678dcf-5r2hl
[...]
Environment Variables from:
env-receiver-config ConfigMap Optional: false
Environment: <none>
[...]
But it still gets its environment variables from env-receiver-config, not env-receiver-config-258g858mgg.
My current workaround is to disable the hash suffixes in the kustomize.yml.
generatorOptions:
  disableNameSuffixHash: true
It looks like I'm missing a step to tell the deployment the name of the new configMap. What is it?
It looks like the problem comes from the fact that you generate the ConfigMap through Kustomize but apply the Deployment via kubectl directly, without going through Kustomize.
Basically, Kustomize will look for every reference to env-receiver-config in your resources and replace it with the hash-suffixed version.
For it to work, all your resources have to go through kustomize.
To do so, you need to add to your kustomization.yml:
resources:
- yourDeployment.yml
and then just run kubectl apply -k . again. It should create both the ConfigMap and the Deployment, with the Deployment referencing the right ConfigMap name.
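Put together (in a file named kustomization.yml, which is what kubectl -k looks for), the Kustomization could look like this (assuming the Deployment file sits at k8s/deployment.yml, as in the question's kubectl apply -f command):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- k8s/deployment.yml
configMapGenerator:
- name: env-receiver-config
  literals:
  - ER_PORT=8080
  - ER_DBADDR=http://localhost:8086
  - ER_DBUSER=writeuser
  - ER_DBPASS=writeuser

A single kubectl apply -k . then generates env-receiver-config-<hash> and rewrites the configMapRef in the Deployment to match.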
We recently started using Istio to establish a service mesh within our Kubernetes landscape.
We now have the problem that jobs and cronjobs do not terminate and keep running forever if we inject the istio istio-proxy sidecar container into them. The istio-proxy should be injected though to establish proper mTLS connections to the services the job needs to talk to and comply with our security regulations.
I also noticed the open issues within Istio (istio/issues/6324) and Kubernetes (kubernetes/issues/25908), but neither seems to provide a valid solution anytime soon.
At first a pre-stop hook seemed suitable to solve this issue, but there is some confusion about the concept itself: kubernetes/issues/55807
lifecycle:
  preStop:
    exec:
      command:
      ...
Bottom line: those hooks will not be executed if the container completed successfully.
There are also some relatively new projects on GitHub trying to solve this with a dedicated controller (which I think is the most preferable approach), but to our team they do not feel mature enough to put straight into production:
k8s-controller-sidecars
K8S-job-sidecar-terminator
In the meantime, we ourselves ended up with the following workaround that execs into the sidecar and sends a SIGTERM signal, but only if the main container finished successfully:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: terminate-sidecar-example-service-account
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: terminate-sidecar-example-role
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get","delete"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: terminate-sidecar-example-rolebinding
subjects:
  - kind: ServiceAccount
    name: terminate-sidecar-example-service-account
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: terminate-sidecar-example-role
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: terminate-sidecar-example-cronjob
  labels:
    app: terminate-sidecar-example
spec:
  schedule: "30 2 * * *"
  jobTemplate:
    metadata:
      labels:
        app: terminate-sidecar-example
    spec:
      template:
        metadata:
          labels:
            app: terminate-sidecar-example
          annotations:
            sidecar.istio.io/inject: "true"
        spec:
          serviceAccountName: terminate-sidecar-example-service-account
          containers:
            - name: ****
              image: ****
              command:
                - "/bin/ash"
                - "-c"
              args:
                - node index.js && kubectl exec -n ${POD_NAMESPACE} ${POD_NAME} -c istio-proxy -- bash -c "sleep 5 && /bin/kill -s TERM 1 &"
              env:
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
So, the ultimate question to all of you is: Do you know of any better workaround, solution, controller, ... that would be less hacky / more suitable to terminate the istio-proxy container once the main container finished its work?
- command:
  - /bin/sh
  - -c
  - |
    until curl -fsI http://localhost:15021/healthz/ready; do echo \"Waiting for Sidecar...\"; sleep 3; done;
    echo \"Sidecar available. Running the command...\";
    <YOUR_COMMAND>;
    x=$(echo $?); curl -fsI -X POST http://localhost:15020/quitquitquit && exit $x
Update: the sleep loop can be omitted if holdApplicationUntilProxyStarts is set to true (globally or as an annotation), starting with Istio 1.7.
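For reference, the per-pod annotation form would look roughly like this (a sketch based on Istio's proxy.istio.io/config annotation; verify against the documentation for your Istio version):

annotations:
  proxy.istio.io/config: |
    holdApplicationUntilProxyStarts: true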
This was not a misconfiguration; it was a bug in upstream Kubernetes. As of September 2019, it has been resolved by Istio by introducing a /quitquitquit endpoint on the Pilot agent.
Unfortunately, Kubernetes has not been so steadfast in solving this issue themselves, so it still exists in some facets. However, the /quitquitquit endpoint in Istio should have resolved the problem for this specific use case.
I have found a workaround by editing the istio-sidecar-injector ConfigMap, as per the Istio documentation:
https://istio.io/docs/setup/additional-setup/sidecar-injection/
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio-sidecar-injector
data:
  config: |-
    policy: enabled
    neverInjectSelector:
      - matchExpressions:
        - {key: job-name, operator: Exists}
But with this change the sidecar will not be injected into our CronJob's pods and no Istio policy will be applied to the job; in our case we don't want any policy to be enforced by Istio anyway.
Note: job-name is a label that is added by default when the Job creates the pod.
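If you only want to skip injection for specific Jobs rather than for all of them, the same effect can be achieved per pod template with the injection annotation already shown in the question's CronJob, just set to "false" (a sketch of the relevant part of a Job spec):

spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"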
For those for whom curl is a luxury, here is my wget version of Dimitri's code:
command:
- /bin/sh
- -c
- |
  until wget -q --spider http://127.0.0.1:15021/healthz/ready 2>/dev/null; do echo "Waiting for Istio sidecar..."; sleep 3; done;
  echo \"Sidecar available. Running...\";
  <COMMAND>;
  x=$?; wget -q --post-data='' -S -O /dev/null http://127.0.0.1:15020/quitquitquit && exit $x
I'm trying to run kubectl -f pod.yaml, but I'm getting this error. Any hint?
error: error validating "/pod.yaml": error validating data: [ValidationError(Pod): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "nodeSelector" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "tasks" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false
pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod-10.0.1
  namespace: e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
    - name: main
      image: bded587f4604
imagePullSecrets: ["testo", "awsecr-cred"]
nodeSelector:
  kubernetes.io/hostname: 11-4730
tasks:
  - name: traind
    command: et estimate -e v/lat/exent_sps/enet/default_sql.spec.txt -r /out
    completions: 1
    inputs:
      datasets:
        - name: poa
          version: 2018-
          mountPath: /in/0
You have an indentation error in your pod.yaml definition with imagePullSecrets, and you need to specify the - name: key for your imagePullSecrets entries. It should be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-test-pod-10.0.1.11-e8b74730
  namespace: test-e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
    - name: main
      image: test.io/tets/maglev-test-bded587f4604
  imagePullSecrets:
    - name: testawsecr-cred
...
Note that imagePullSecrets: is plural and an array, so you can specify credentials for multiple registries.
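For example, with the two secrets from the question the list form would be (secret names taken from the question; the secrets themselves are assumed to exist in the Pod's namespace):

imagePullSecrets:
  - name: testo
  - name: awsecr-cred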
If you are using Docker you can also specify multiple credentials in ~/.docker/config.json.
If you have the same credentials in imagePullSecrets: and configs in ~/.docker/config.json, the credentials are merged.