error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"

https://codelabs.developers.google.com/codelabs/cloud-mongodb-statefulset/index.html?index=..%2F..index#5
error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "apps/v1beta1"
The following command causes the above response in Google Cloud Shell:
kubectl apply -f mongo-statefulset.yaml
I am working on deploying a MongoDB StatefulSet with a sidecar, following the instructions in this demo to a T, but I received the above error. Does anyone have an explanation for the error? Or know a way to deploy a StatefulSet MongoDB on GKE?
mongo-statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 100Gi
Changing apiVersion to v1 in the YAML file returns a similar error:
error: unable to recognize "mongo-statefulset.yaml": no matches for kind "StatefulSet" in version "v1"

Explanation:
The apiVersion of a particular Kubernetes resource is subject to change over time. The fact that StatefulSet once used apiVersion: apps/v1beta1 doesn't mean it will use it forever.
As dany L already suggested in his answer, apps/v1beta1 has most probably already been removed in the Kubernetes version you're using. The mentioned changes in supported API versions were introduced quite a long time ago (the article was published on Thursday, July 18, 2019), so chances are you're using a version newer than 1.15.
I'd like to quote it in the body of the answer as well, to make it clearly visible without the need for additional searching through the docs. What actually happened when 1.16 was released was (among other major changes):
StatefulSet in the apps/v1beta1 and apps/v1beta2 API versions is no longer served. Migrate to use the apps/v1 API version, available since v1.9. Existing persisted data can be retrieved/updated via the new version.
Solution:
Workaround: create your cluster with version 1.15 and it should work.
As to the above workaround: I agree that it is a workaround and of course it will work, but this is not a recommended approach. New Kubernetes versions are developed and released for a reason. Apart from changes in APIs, many other important things are introduced: bugs are fixed, existing functionalities are improved, and new ones are added. So there is no point in using something that will soon be deprecated anyway.
Instead, you should use the currently supported/required apiVersion for the particular Kubernetes resource, in your case StatefulSet.
The easiest way to check which apiVersion StatefulSet uses in your Kubernetes installation is by running:
kubectl explain statefulset
which will tell you (among many other interesting things) which apiVersion it is supposed to use:
KIND: StatefulSet
VERSION: apps/v1
...
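You can also check it at the resource level: kubectl api-resources lists every resource kind together with the API group/version your cluster currently serves it from (on recent kubectl the column is APIVERSION; older releases show only APIGROUP), for example:
$ kubectl api-resources | grep statefulsets
statefulsets   sts   apps/v1   true   StatefulSet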
To summarize your particular case:
If you're using Kubernetes newer than 1.15, edit your mongo-statefulset.yaml and replace apiVersion: apps/v1beta1 with the currently supported apiVersion: apps/v1.
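Note that apps/v1 is also stricter than apps/v1beta1: for a StatefulSet, spec.selector is now required and must match the template labels. A minimal sketch of the top of the migrated manifest (labels taken from the template above):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  selector:
    matchLabels: # required in apps/v1
      role: mongo
      environment: test
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    # ... rest of the spec unchanged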

Possible explanation: the deprecated API removals described in the article referenced above.
Workaround: create your cluster with version 1.15 and it should work.

Related

Kubectl error upon applying agones fleet: ensure CRDs are installed first

I am using minikube (docker driver) with kubectl to test an agones fleet deployment. Upon running kubectl apply -f lobby-fleet.yml (and when I try to apply any other agones yaml file) I receive the following error:
error: resource mapping not found for name: "lobby" namespace: "" from "lobby-fleet.yml": no matches for kind "Fleet" in version "agones.dev/v1"
ensure CRDs are installed first
lobby-fleet.yml:
apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: lobby
spec:
replicas: 2
scheduling: Packed
template:
metadata:
labels:
mode: lobby
spec:
ports:
- name: default
portPolicy: Dynamic
containerPort: 7600
container: lobby
template:
spec:
containers:
- name: lobby
image: gcr.io/agones-images/simple-game-server:0.12 # Modify to correct image
I am running this on WSL2, but receive the same error when using the Windows installation of kubectl (through Chocolatey). I have minikube installed and running for Ubuntu in WSL2 using Docker.
I am still new to using k8s, so apologies if the answer to this question is obvious; I just couldn't find it elsewhere.
Thanks in advance!
In order to create a resource of kind Fleet, you have to first apply the Custom Resource Definition (CRD) that defines what a Fleet is.
I've looked into the YAML installation instructions for Agones, and the manifest contains the CRDs. You can find them by searching for kind: CustomResourceDefinition.
I recommend first installing Agones according to the instructions in the docs.
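For illustration, the installation essentially boils down to creating the namespace and applying the upstream manifest, which contains the CRDs (the release number below is only an example; take the current one from the Agones docs):
kubectl create namespace agones-system
kubectl apply --server-side -f https://raw.githubusercontent.com/googleforgames/agones/release-1.38.0/install/yaml/install.yaml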

Best practice on reading and writing to a persistent Volume in a live Kubernetes Cluster

I am designing a Kubernetes system which will require storing audio files. To do this I would like to set up a persistent storage volume making use of a StatefulSet.
I have found a few tutorials on how to set something like this up, but I am unsure how to read/write the files once I have created it. What would be the best approach? I will be using a Flask app, but if I could just get a high-level approach then I can find the exact libraries myself.
Leaving aside how it should be implemented programming-wise and the specific tuning for dealing with audio files, you can use your Persistent Volume the same way you would read/write data in a directory (as correctly pointed out by user @zerkms in the comments).
Answering this specific part of the question:
but I am unsure once I have created it how to read/write the files.
Assuming that you've created your StatefulSet in the following way:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ubuntu-sts
spec:
  selector:
    matchLabels:
      app: ubuntu # has to match .spec.template.metadata.labels
  serviceName: "ubuntu"
  replicas: 1
  template:
    metadata:
      labels:
        app: ubuntu
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: ubuntu
        image: ubuntu
        command:
        - sleep
        - "infinity"
        volumeMounts:
        - name: audio-volume
          mountPath: /audio
  volumeClaimTemplates:
  - metadata:
      name: audio-volume
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: standard
      resources:
        requests:
          storage: 1Gi
Take a look at the part below (it shows where your Volume will be mounted):
        volumeMounts:
        - name: audio-volume
          mountPath: /audio # <-- HERE!
Disclaimer!
This example has a 1:1 Pod-to-Volume relation. If your use case is different, you will need to refer to the Kubernetes documentation about accessModes.
You can exec into this Pod to explore how you can further develop your application:
$ kubectl exec -it ubuntu-sts-0 -- /bin/bash
root@ubuntu-sts-0:/# echo "Hello from your /audio directory!" > /audio/hello.txt
root@ubuntu-sts-0:/# cat /audio/hello.txt
Hello from your /audio directory!
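From application code the same applies: reading and writing is ordinary file I/O against the mount path. Below is a minimal hypothetical Flask sketch (the /audio path matches the mountPath above; route and file names are made up for illustration):
import os
from flask import Flask, request, send_file
from werkzeug.utils import secure_filename

app = Flask(__name__)
AUDIO_DIR = "/audio"  # must match the container's volumeMount mountPath

@app.route("/audio/<name>", methods=["PUT"])
def upload(name):
    # write the raw request body to a file on the persistent volume
    path = os.path.join(AUDIO_DIR, secure_filename(name))
    with open(path, "wb") as f:
        f.write(request.get_data())
    return "", 204

@app.route("/audio/<name>", methods=["GET"])
def download(name):
    # read the file back from the persistent volume
    return send_file(os.path.join(AUDIO_DIR, secure_filename(name)))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)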
A side note!
If it happens that you are using a cloud-provider-managed Kubernetes cluster like GKE, EKS or AKS, please refer to its documentation about storage options.
I encourage you to check the official documentation on Persistent Volumes:
Kubernetes.io: Docs: Concepts: Storage: Persistent Volumes
Also, please take a look on documentation regarding Statefulset:
Kubernetes.io: Docs: Concepts: Workloads: Controllers: Statefulset
Additional resources:
Guru99.com: Reading and writing files in Python

Kubernetes persistent volume data corrupted after multiple pod deletions

I am struggling with a simple one-replica deployment of the official EventStore image on a Kubernetes cluster. I am using a persistent volume for the data storage.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-eventstore
spec:
  strategy:
    type: Recreate
  replicas: 1
  template:
    metadata:
      labels:
        app: my-eventstore
    spec:
      imagePullSecrets:
      - name: runner-gitlab-account
      containers:
      - name: eventstore
        image: eventstore/eventstore
        env:
        - name: EVENTSTORE_DB
          value: "/usr/data/eventstore/data"
        - name: EVENTSTORE_LOG
          value: "/usr/data/eventstore/log"
        ports:
        - containerPort: 2113
        - containerPort: 2114
        - containerPort: 1111
        - containerPort: 1112
        volumeMounts:
        - name: eventstore-storage
          mountPath: /usr/data/eventstore
      volumes:
      - name: eventstore-storage
        persistentVolumeClaim:
          claimName: eventstore-pv-claim
And this is the YAML for my persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: eventstore-pv-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
The deployments work fine. It's when I tested for durability that I started to encounter a problem. I deleted a pod to force the actual state to diverge from the desired state, and to see how Kubernetes reacts.
It immediately launched a new pod to replace the deleted one, and the admin UI was still showing the same data. But after deleting a pod for the second time, the new pod did not come up. I got an error message saying "record too large", which indicates corrupted data according to this discussion: https://groups.google.com/forum/#!topic/event-store/gUKLaxZj4gw
I tried again a couple of times, with the same result every time: after deleting the pod for the second time, the data is corrupted. This has me worried that an actual failure would cause a similar result.
However, when deploying new versions of the image, or scaling the pods in the deployment to zero and back to one, no data corruption occurs. After several tries everything is fine, which is odd since that also completely replaces pods (I checked the pod ids and they changed).
This has me wondering if deleting a pod using kubectl delete is somehow more forceful in the way a pod is terminated. Do any of you have similar experience? Or insights on whether/how delete is different? Thanks in advance for your input.
Regards,
Oskar
I was referred to this pull request on GitHub, which stated that the process was not being killed properly: https://github.com/EventStore/eventstore-docker/pull/52
After building a new image with the Dockerfile from the pull request and putting this image in the deployment, I've been killing pods left and right with no data corruption issues anymore.
Hope this helps someone facing the same issue.
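For what it's worth, this matches how pod termination works: kubectl delete sends SIGTERM to the container's PID 1 and, once terminationGracePeriodSeconds (30 seconds by default) expires, follows up with SIGKILL. If the image's entrypoint doesn't forward SIGTERM to the database process, that SIGKILL lands mid-write. While testing, you can also lengthen the window (the pod name is a placeholder):
kubectl delete pod <my-eventstore-pod> --grace-period=120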

How to parameterize image version when passing yaml for container creation

Is there any way to pass the image version from a variable/config when passing a manifest .yaml to the kubectl command?
Example :
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:${IMAGE_VERSION}
        imagePullPolicy: Always
        resources:
          limits:
            cpu: "1.2"
            memory: 100Mi
        ports:
        - containerPort: 80
The use case is to launch a specific image version which is set at the Kubernetes level, with the variable resolved by Kubernetes itself on the server side.
Thanks and Regards,
Ravi
Kubernetes manifest files are static YAML/JSON.
If you would like to template the manifests (and manage multiple resources in a bundle-like fashion), I strongly recommend having a look at Helm.
I've recently created a Workshop which focuses precisely on the "Templating" features of Helm.
Helm does a lot more than just templating, it is built as a full fledged package manager for Kubernetes applications (think Apt/Yum/Homebrew).
If you want to handle everything client side, have a look at https://github.com/errordeveloper/kubegen
At some point, though, you will need the other features of Helm, and a migration will be needed when that time comes; I recommend biting the bullet and going for Helm straight away.
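To give a flavour of what that looks like, here is a minimal hypothetical chart (file names and the imageVersion value key are made up for illustration):
# chart/values.yaml
imageVersion: "1.25"

# chart/templates/replication-controller.yaml (excerpt)
    spec:
      containers:
      - name: nginx
        image: "nginx:{{ .Values.imageVersion }}"
Rendering and applying it, overriding the version on the command line:
helm install nginx ./chart --set imageVersion=1.26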
After looking into this recently, we decided to just go with sed. Wrap kubectl apply in a small bash script and replace the placeholders before running apply.
We did look into more sophisticated tooling, but we only found Helm. However, Helm is a complex piece of technology that does way more than just templating. It changes your workflow a lot, as you no longer deploy using kubectl and have to keep a Helm package repo around to push your packages to. Our assessment was that Helm is not useful for deploying our application and that using it just for the templating is overkill.
Here is an example of how to do this with sed (an excerpt from my typical CircleCI config):
replaces="s/{.Namespace}/$CIRCLE_BRANCH/;";
replaces="$replaces s/{.CiBuild}/$CIRCLE_BUILD_NUM/; ";
replaces="$replaces s/{.CiCommit}/$CIRCLE_SHA1/; ";
replaces="$replaces s/{.CiUser}/$CIRCLE_USERNAME/; ";
cat ./k8s/app.yaml | sed -e "$replaces" | ./kubectl --kubeconfig=`pwd`/.kube/config apply --namespace=$NAMESPACE -f -
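A related trick: since the manifest in the question already uses shell-style ${IMAGE_VERSION} placeholders, envsubst (from the gettext package) does the same substitution without hand-rolled sed expressions (assuming the manifest is saved as nginx-rc.yaml):
export IMAGE_VERSION=1.25
envsubst < nginx-rc.yaml | kubectl apply -f -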

Kubernetes rolling update in case of secret update

I have a Replication Controller with one replica using a secret. How can I update or recreate its (lone) pod—without downtime—with latest secret value when the secret value is changed?
My current workaround is increasing number of replicas in the Replication Controller, deleting the old pods, and changing the replica count back to its original value.
Is there a command or flag to induce a rolling update retaining the same container image and tag? When I try to do so, it rejects my attempt with the following message:
error: Specified --image must be distinct from existing container image
A couple of issues #9043 and #13488 describe the problem reasonably well, and I suspect a rolling update approach will eventuate shortly (like most things in Kubernetes), though unlikely for 1.3.0. The same issue applies with updating ConfigMaps.
Kubernetes will do a rolling update whenever anything in the deployment pod spec is changed (typically the image, to a new version), so one suggested workaround is to set a dummy env variable in your deployment pod spec (e.g. RESTART_).
Then, when you've updated your secret/configmap, bump the env value in your deployment (via kubectl apply, patch, or edit), and Kubernetes will start a rolling update of your deployment.
Example Deployment spec:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test-nginx
    spec:
      containers:
      - name: nginx
        image: "nginx:stable"
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx/conf.d
          name: config
          readOnly: true
        - mountPath: /etc/nginx/auth
          name: tokens
          readOnly: true
        env:
        - name: RESTART_
          value: "13"
      volumes:
      - name: config
        configMap:
          name: test-nginx-config
      - name: tokens
        secret:
          secretName: test-nginx-tokens
Two tips:
Your environment variable name can't start with an _ or it magically disappears somehow.
If you use a number for your restart variable, you need to wrap it in quotes.
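As an illustration of the "bump the env value" step, here is a one-liner against the example Deployment above (a strategic merge patch; container and env entries are matched by their name keys):
kubectl patch deployment test-nginx -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","env":[{"name":"RESTART_","value":"14"}]}]}}}}'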
If I understand correctly, Deployment should be what you want.
Deployment supports rolling update for almost all fields in the pod template.
See http://kubernetes.io/docs/user-guide/deployments/