I want to add a new Pod property in the YAML file while creating a Pod in Kubernetes.
By looking at the existing properties I made all the required changes in the Kubernetes source code, but I still get the parsing error below:
error: error validating "podbox.yml": error validating data: found invalid field newproperty for v1.Pod
Example Pod yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: podbox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: podbox
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "1"
  restartPolicy: Always
newproperty: false
`newproperty` is not getting parsed while creating the Pod.
Is there any specific change required?
You don't want to add new fields to kind: Pod, because then your Kubernetes code will be on a fork and your config will be non-portable.
If you are planning a contribution to submit to the Kubernetes code, you should first join the appropriate SIG (sig-node or sig-apps for Pod changes) and get support for your proposed change. Someone there can point you to example PRs that you can follow to add a field.
If you just need to put some extra information in a Pod that you or your own programs can parse, then use an annotation.
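For example, a minimal sketch (the annotation key here is made up for illustration; any prefix and name of your own will do):

apiVersion: v1
kind: Pod
metadata:
  name: podbox
  namespace: default
  annotations:
    example.com/newproperty: "false"   # free-form key/value that your own tooling can read
spec:
  containers:
  - name: podbox
    image: busybox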
If you want to create a new type in your Kubernetes cluster, use a Custom Resource.
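A Custom Resource starts with a CustomResourceDefinition; a rough sketch (the group, kind and field names here are invented for illustration) could look like:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: boxes.example.com          # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: boxes
    singular: box
    kind: Box
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              newproperty:
                type: boolean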
Just remove the line
newproperty: false
from your YAML and you should be fine.
As far as I know you should be declaring them inside data:
apiVersion: v1
kind: Pod
metadata:
  name: podbox
  namespace: default
data:
  newproperty: false
If you want an environment variable to be passed to the container, use this structure:
....
containers:
- name: name
  image: some_image
  env:
  - name: SOME_VAR
    value: "Hello from the kubernetes"
....
I have a pod with the following specs
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        configMapKeyRef:
          name: watch-namespace-config
          key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks for values in the watch-namespace-config configmap.
When I manually change the ConfigMap values, I want the pod to restart automatically to reflect the change. Is that possible in any way?
This is currently a feature in progress https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches for changes in ConfigMaps and/or Secrets, then performs a rolling upgrade on the relevant DeploymentConfig, Deployment, DaemonSet, StatefulSet and Rollout.
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
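For example, a Deployment opted in via annotation (the annotation key is taken from the Reloader README) would look roughly like:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  annotations:
    reloader.stakater.com/auto: "true"   # reload when a referenced ConfigMap/Secret changes
spec:
  # ... rest of the Deployment spec unchanged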
As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated.
An optional solution for this scenario is to use Kustomization.
Kustomization generates a unique name with a hash suffix every time you update the ConfigMap/Secret, for example: ConfigMap-xxxxxx.
If you then use:
kubectl kustomize . | kubectl apply -f -
kubectl will "update" the changes with the new config map values.
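A minimal kustomization.yaml for the ConfigMap from the question above could look roughly like this (the manifest file name is an assumption):

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- pod.yaml                      # the manifest that references the ConfigMap
configMapGenerator:
- name: watch-namespace-config
  literals:
  - WATCH_NAMESPACE=dev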
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
I'm seeing the following error when running a pod. I matched it against the documentation on the Kubernetes website and the code is the same as the one I have written below, but I still end up with the error below.
error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
apiVersion: v1
kind: pod
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: anishanil/kubernetes:node
    ports:
      containerPort: 3000
     resources:
      limits:
        memory: "100Mi"
        cpu: "100m"
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.1", GitCommit:"b0b7a323cc5a4a2019b2e9520c21c7830b7f708e", GitTreeState:"clean", BuildDate:"2017-04-03T20:44:38Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6+IKS", GitCommit:"44b769243cf9b3fe09c1105a4a8749e8ff5f4ba8", GitTreeState:"clean", BuildDate:"2019-08-21T12:48:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Any help is greatly appreciated
Thank you
I matched it against the documentation on the Kubernetes website and the code is the same as the one I have written below...
Could you link the fragment of documentation against which you compared your code? As other people have already suggested in their answers and comments, your yaml is not valid. Are you sure you're not using an outdated tutorial or docs?
Let's debug it together step by step:
When I use exactly the same code you posted in your question, the error message I get is quite different from the one you posted:
error: error parsing pod.yml: error converting YAML to JSON: yaml:
line 12: did not find expected key
OK, so let's go to the mentioned line 12 and check where the problem might be:
11    ports:
12      containerPort: 3000
13     resources:
14      limits:
15        memory: "100Mi"
16        cpu: "100m"
Line 12 itself actually looks totally fine, so the problem must be elsewhere. Let's debug it further using an online yaml validator. It also reports that this yaml is syntactically incorrect, however it points out a different line:
(): did not find expected key while parsing a block mapping
at line 9 column 5
If you look carefully at the quoted fragment above, you may notice that the indentation level in line 13 looks quite strange. When you remove the one unnecessary space right before resources (it should be at the same level as ports), the yaml validator will tell you that your yaml syntax is correct. However, being syntactically valid yaml does not mean it is valid input for Kubernetes, which requires a specific structure following certain rules.
Let's try it again... Now kubectl apply -f pod.yml returns a quite different error:
Error from server (BadRequest): error when creating "pod.yml": pod in
version "v1" cannot be handled as a Pod: no kind "pod" is registered
for version "v1" in scheme
"k8s.io/kubernetes/pkg/api/legacyscheme/scheme.go:29"
A quick search will give you an answer to that as well. The proper value of the kind: key is Pod, not pod.
Once we've fixed that, let's run kubectl apply -f pod.yml again. Now it gives us a different error:
error: error validating "pod.yml": error validating data:
ValidationError(Pod.spec.containers[0].ports): invalid type for
io.k8s.api.core.v1.Container.ports: got "map", expected "array";
which is pretty self-explanatory: you are not supposed to use a "map" in a place where an "array" is expected, and the error message points out precisely where, namely:
Pod.spec.containers[0].ports.
Let's correct this fragment:
11    ports:
12      containerPort: 3000
In yaml formatting, the - character implies the start of an array, so it should look like this:
11    ports:
12    - containerPort: 3000
If we run kubectl apply -f pod.yml again, we finally get the expected message:
pod/helloworld-deployment created
The final, correct version of the Pod definition looks as follows:
apiVersion: v1
kind: Pod
metadata:
  name: helloworld-deployment
  labels:
    app: helloworld
spec:
  containers:
  - name: helloworld
    image: anishanil/kubernetes:node
    ports:
    - containerPort: 3000
    resources:
      limits:
        memory: "100Mi"
        cpu: "100m"
Your yaml has errors. You can use a yaml validation tool to check it, or use the below instead:
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: helloworld
  name: helloworld-deployment
spec:
  containers:
    -
      image: "anishanil/kubernetes:node"
      name: helloworld
      ports:
        - containerPort: 3000
      resources:
        limits:
          cpu: 100m
          memory: 100Mi
resources should be at the same indentation level as image, name, and ports in the yaml definition. Or you can use the yaml below.
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: helloworld
  name: helloworld-deployment
spec:
  containers:
  - image: "anishanil/kubernetes:node"
    name: helloworld
    ports:
    - containerPort: 3000
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
For anyone who stumbles on this because of a similar issue: I found a solution that worked for me in the answer below. I disregarded it at first because there was no way it should have solved the issue... but it did.
The solution is basically to check the box "Check for latest version" under the advanced drop-down in the Kubectl configuration window, or to add the following line under the Kubernetes task inputs:
checkLatest: true
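In pipeline YAML this might look roughly like the following; treat the task name and the other inputs as assumptions about a typical setup rather than a verified configuration:

- task: Kubernetes@1
  inputs:
    command: apply
    arguments: -f deployment.yml
    checkLatest: true   # equivalent of the "Check for latest version" checkbox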
Link to answer:
ADO: error validating data: the server could not find the requested
Which in turn links to this:
Release Agent job kubectl apply returns 'error validating data'
I'm using an ffmpeg docker image from a KubernetesPodOperator() inside Airflow for extracting frames from a video.
It works fine, but I am not able to retrieve the stored frames: how can I store the frames generated by the Pod directly on my file system (the host machine)?
Update:
From https://airflow.apache.org/kubernetes.html# I think I figured out that I need to work on the volume_mount, volume_config and volume parameters, but still no luck.
Error message:
"message":"Not found: \"test-volume\"","field":"spec.containers[0].volumeMounts[0].name"
PV and PVC:
The command kubectl get pv,pvc test-volume gives:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/test-volume 10Gi RWO Retain Bound default/test-volume manual 3m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-volume Bound test-volume 10Gi RWO manual 3m
Code:
volume_mount = VolumeMount('test-volume',
                           mount_path='/',
                           sub_path=None,
                           read_only=False)

volume_config = {
    'persistentVolumeClaim': {
        'claimName': 'test-volume'  # uses the persistentVolumeClaim given in the Kube yaml
    }
}

volume = Volume(name="test-volume", configs=volume_config)
with DAG('test_kubernetes',
         default_args=default_args,
         schedule_interval=schedule_interval,
         ) as dag:

    extract_frames = KubernetesPodOperator(namespace='default',
                                           image="jrottenberg/ffmpeg:3.4-scratch",
                                           arguments=[
                                               "-i", "http://www.jell.yfish.us/media/jellyfish-20-mbps-hd-hevc-10bit.mkv",
                                               "test_%04d.jpg"
                                           ],
                                           name="extract-frames",
                                           task_id="extract_frames",
                                           volume=[volume],
                                           volume_mounts=[volume_mount],
                                           get_logs=True
                                           )
Here's some speculation as to what may be wrong:
KubernetesPodOperator expects the parameter "volumes", not "volume" (this is most likely where your error is coming from).
In general, it's bad practice to mount onto "/", since that hides everything that ships in the image you're running; i.e. you should probably change "mount_path" in your VolumeMount object to something else, like "/stored_frames".
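Putting those two points together, and assuming the VolumeMount is changed to mount_path='/stored_frames', the operator call from the question would change roughly like this (a sketch, not a verified DAG):

extract_frames = KubernetesPodOperator(namespace='default',
                                       image="jrottenberg/ffmpeg:3.4-scratch",
                                       arguments=[
                                           "-i", "http://www.jell.yfish.us/media/jellyfish-20-mbps-hd-hevc-10bit.mkv",
                                           "/stored_frames/test_%04d.jpg"  # write into the mounted directory
                                       ],
                                       name="extract-frames",
                                       task_id="extract_frames",
                                       volumes=[volume],              # note: "volumes", not "volume"
                                       volume_mounts=[volume_mount],  # VolumeMount(..., mount_path='/stored_frames', ...)
                                       get_logs=True)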
You should create a test pod to verify your k8s objects (volumes, pod, configmap, secrets, etc.) before wrapping the pod creation in a DAG with KubernetesPodOperator. Based on your code above, it can look like this:
apiVersion: v1
kind: Pod
metadata:
  name: "extract-frames-pod"
  namespace: "default"
spec:
  containers:
  - name: "extract-frames"
    image: "jrottenberg/ffmpeg:3.4-scratch"
    command:
    args: ["-i", "http://www.jell.yfish.us/media/jellyfish-20-mbps-hd-hevc-10bit.mkv", "test_%04d.jpg"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: "test-volume"
      # do not use "/" for mountPath.
      mountPath: "/images"
  restartPolicy: Never
  volumes:
  - name: "test-volume"
    persistentVolumeClaim:
      claimName: "test-volume"
  serviceAccountName: default
I expect you will get the same error that you had: "message":"Not found: \"test-volume\"","field":"spec.containers[0].volumeMounts[0].name"
Which I think is an issue with your PersistentVolume manifest file.
Did you set the path in test-volume? Something like:
path: /test-volume
And does that path exist on the target volume? If not, create that directory/folder; that might solve your problem.
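For reference, a minimal PersistentVolume along those lines, assuming a hostPath-backed volume (adjust to whatever actually backs your PV), could look like:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /test-volume   # this directory must exist on the node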
I am struggling with a simple one-replica deployment of the official Event Store image on a Kubernetes cluster. I am using a persistent volume for the data storage.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-eventstore
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: my-eventstore
    spec:
      imagePullSecrets:
      - name: runner-gitlab-account
      containers:
      - name: eventstore
        image: eventstore/eventstore
        env:
        - name: EVENTSTORE_DB
          value: "/usr/data/eventstore/data"
        - name: EVENTSTORE_LOG
          value: "/usr/data/eventstore/log"
        ports:
        - containerPort: 2113
        - containerPort: 2114
        - containerPort: 1111
        - containerPort: 1112
        volumeMounts:
        - name: eventstore-storage
          mountPath: /usr/data/eventstore
      volumes:
      - name: eventstore-storage
        persistentVolumeClaim:
          claimName: eventstore-pv-claim
And this is the yaml for my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: eventstore-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
The deployments work fine. It's when I tested for durability that I started to encounter a problem: I delete a pod to force the actual state away from the desired state and see how Kubernetes reacts.
It immediately launched a new pod to replace the deleted one, and the admin UI was still showing the same data. But after deleting a pod for the second time, the new pod did not come up. I got an error message saying "record too large", which indicates corrupted data according to this discussion: https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw
I tried again a couple of times, with the same result every time: after deleting the pod for the second time the data is corrupted. This has me worried that an actual failure would cause a similar result.
However, when deploying new versions of the image or scaling the pods in the deployment to zero and back to one, no data corruption occurs. After several tries everything is fine, which is odd since that also completely replaces pods (I checked the pod IDs and they changed).
This has me wondering whether deleting a pod using kubectl delete is somehow more forceful in the way the pod is terminated. Do any of you have similar experience, or insights on if/how delete is different? Thanks in advance for your input.
Regards,
Oskar
I was referred to this pull request on GitHub, which stated that the process was not killed properly: https://github.com/EventStore/eventstore-docker/pull/52
After building a new image with the Dockerfile from the pull request, I put this image in the deployment. I have been killing pods left and right, and there are no data corruption issues anymore.
Hope this helps someone facing the same issue.
I have a Replication Controller with one replica using a secret. How can I update or recreate its (lone) pod—without downtime—with latest secret value when the secret value is changed?
My current workaround is increasing number of replicas in the Replication Controller, deleting the old pods, and changing the replica count back to its original value.
Is there a command or flag to induce a rolling update retaining the same container image and tag? When I try to do so, it rejects my attempt with the following message:
error: Specified --image must be distinct from existing container image
A couple of issues, #9043 and #13488, describe the problem reasonably well, and I suspect a rolling-update approach will eventuate shortly (like most things in Kubernetes), though unlikely for 1.3.0. The same issue applies to updating ConfigMaps.
Kubernetes will do a rolling update whenever anything in the deployment pod spec is changed (e.g. typically the image to a new version), so one suggested workaround is to set an env variable in your deployment pod spec (e.g. RESTART_).
Then when you've updated your secret/configmap, bump the env value in your deployment (via kubectl apply, or patch, or edit), and Kubernetes will start a rolling update of your deployment.
Example Deployment spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 2
  template:
    metadata:
    spec:
      containers:
      - name: nginx
        image: "nginx:stable"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: config
          readOnly: true
        - mountPath: /etc/nginx/auth
          name: tokens
          readOnly: true
        env:
        - name: RESTART_
          value: "13"
      volumes:
      - name: config
        configMap:
          name: test-nginx-config
      - name: tokens
        secret:
          secretName: test-nginx-tokens
Two tips:
your environment variable name can't start with an _ or it magically disappears somehow.
if you use a number for your restart variable you need to wrap it in quotes
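For example, with the Deployment above, bumping the variable can be done with a one-liner (a sketch; substitute your own deployment name):

kubectl set env deployment/test-nginx RESTART_="14"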
If I understand correctly, Deployment should be what you want.
Deployment supports rolling update for almost all fields in the pod template.
See http://kubernetes.io/docs/user-guide/deployments/