Kubernetes cronjob needs to set a runtime config of batch/v2alpha1=true - kubernetes

When I want to run the demo, I get the error:
error: error validating "cronJob_example.yaml": error validating data:
couldn't find type: v2alpha1.CronJob; if you choose to ignore these
errors, turn validation off with --validate=false
Then I found:
Prerequisites You need a working Kubernetes cluster at version >= 1.4
(for ScheduledJob), >= 1.5 (for CronJob), with batch/v2alpha1 API
turned on by passing --runtime-config=batch/v2alpha1=true while
bringing up the API server (see Turn on or off an API version for your
cluster for more).
The above prerequisites require passing --runtime-config=batch/v2alpha1=true, but I don't know where and how to do that.

It is documented here: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/. We need to enable this feature in the API server.
On the master server, you need to add the line to the command section of the file /etc/kubernetes/manifests/kube-apiserver.yaml, then restart the whole cluster.
After the restart, check the API versions. We should see the feature enabled.
kubectl api-versions |grep batch
batch/v1
batch/v2alpha1

Related

How can I tell if server-side apply is enabled in my Kubernetes cluster?

The page on server-side apply in the Kubernetes docs suggests that it can be enabled or disabled (e.g., the docs say, "If you have Server Side Apply enabled ...").
I have a GKE cluster and I would like to check if server-side apply is enabled. How can I do this?
You can try creating any object, such as a namespace, and check its YAML output; that will give you an idea of whether SSA is enabled or not.
Command :
kubectl create ns test-ssa
Get the created namespace
kubectl get ns test-ssa -o yaml
If managedFields exists in the output, SSA is working.
Server-side apply was, I think, introduced around K8s version 1.14, and it is now GA with K8s version 1.22. With GKE, I have noticed it's already been part of it in alpha or beta.
If you are using Helm on your GKE cluster, you might have noticed the Server Side Apply.

Tool to check YAML files for Kubernetes offline

Is there some tool available that could tell me whether a K8s YAML configuration (to-be-supplied to kubectl apply) is valid for the target Kubernetes version without requiring a connection to a Kubernetes cluster?
One concrete use-case here would be to detect incompatibilities before actual deployment to a cluster, just because some already-deprecated label has been finally dropped in a newer Kubernetes version, e.g. as has happened for Helm and the switch to Kubernetes 1.16 (see Helm init fails on Kubernetes 1.16.0):
Dropped:
apiVersion: extensions/v1beta1
New:
apiVersion: apps/v1
I want to check these kinds of incompatibilities within a CI system, so that I can reject it before even attempting to deploy it.
Just run the command below to validate the syntax:
kubectl create -f <yaml-file> --dry-run
In fact the dry-run option is to validate the YAML syntax and the object schema. You can grab the output into a variable and if there is no error then rerun the command without dry-run
You could use kubeval
https://kubeval.instrumenta.dev/
I don't think kubectl supports client-side-only validation yet (02/2022).
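A CI-side check of the kind asked for above, as an added Go example (not code from any of the answers, nor from kubectl or kubeval): it scans a multi-document manifest offline for apiVersion/kind pairs that Kubernetes 1.16 stopped serving. The removedIn116 table is a partial list, and Check, topLevel and Sample are names made up for this illustration.

```go
package main

import (
	"fmt"
	"strings"
)

// removedIn116: apiVersion/kind pairs Kubernetes 1.16 no longer serves,
// mapped to the replacement apiVersion (partial table; extend for your target).
var removedIn116 = map[string]string{
	"extensions/v1beta1/Deployment":    "apps/v1",
	"extensions/v1beta1/DaemonSet":     "apps/v1",
	"extensions/v1beta1/ReplicaSet":    "apps/v1",
	"extensions/v1beta1/NetworkPolicy": "networking.k8s.io/v1",
	"apps/v1beta1/Deployment":          "apps/v1",
}

// topLevel returns the value of an unindented "key: value" line in one document.
func topLevel(doc, key string) string {
	for _, line := range strings.Split(doc, "\n") {
		if k, v, ok := strings.Cut(line, ":"); ok && k == key {
			v, _, _ = strings.Cut(v, " #") // drop a trailing comment
			return strings.Trim(strings.TrimSpace(v), `"'`)
		}
	}
	return ""
}

// Check returns one finding per document whose apiVersion/kind pair was removed.
func Check(manifest string) []string {
	var findings []string
	for i, doc := range strings.Split("\n"+manifest, "\n---") {
		av, kind := topLevel(doc, "apiVersion"), topLevel(doc, "kind")
		if repl, ok := removedIn116[av+"/"+kind]; ok {
			findings = append(findings, fmt.Sprintf("document %d: %s %s removed in 1.16, use %s", i+1, av, kind, repl))
		}
	}
	return findings
}

// Sample is a constructed manifest; its Ingress is still served by 1.16.
const Sample = "apiVersion: extensions/v1beta1\nkind: Deployment\n---\n" +
	"apiVersion: apps/v1\nkind: Deployment\n---\n" +
	"apiVersion: \"extensions/v1beta1\" # quoted\nkind: Ingress\n"

func main() {
	fmt.Println(strings.Join(Check(Sample), "\n"))
}
```

The lookup is keyed on the apiVersion/kind pair rather than the apiVersion alone, which is why the extensions/v1beta1 Ingress in the sample is not reported.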

Openshift deployment validation - QA

I wanted to know if there's any tool that can validate an OpenShift deployment. Let's say you have a deploy configuration file with different features (secrets, routes, services, environment variables, etc.) and I want to validate, after the deployment has finished and the pod/s is/are created in OpenShift, that all those things are there as requested in the file. Like a tool for QA.
Thanks
Readiness probes are available, which can execute HTTP requests on the pod to confirm its availability. They can also execute commands to confirm that the desired resources are available within the container.
Readiness probe
There is a particular flag --dry-run in Kubernetes for resource creation which performs basic syntax verification and template object schema validation without real object implementation, therefore you can do the test for all underlying objects defined in the deployment manifest file.
I think it is also feasible to achieve through OpenShift client:
$ oc create -f deployment-app.yaml --dry-run
or
$ oc apply -f deployment-app.yaml --dry-run
You can find some useful OpenShift client commands in Developer CLI Operations documentation page.
For one-time validation, you can create a Job (OpenShift) with an Init Container (OpenShift) that ensures that the whole deployment process is done, and then run a test/shell script with a sequence of kubectl/curl/other commands to ensure that every piece of the deployment is in place and in the desired state.
For continuous validation, you can create a CronJob (OpenShift) that will periodically create a test Job and report the result somewhere.
This answer can help you to create all that stuff.

kubernetes petset on google cloud

I am running a Kubernetes cluster on Google Cloud (version 1.3.5).
I found a redis.yaml
that uses PetSet to create a Redis cluster, but when I run kubectl create -f redis.yaml I get the following error:
error validating "redis.yaml": error validating data: the server could not find the requested resource (get .apps); if you choose to ignore these errors, turn validation off with --validate=false
I can't find why I get this error or how to solve this.
PetSet is currently an alpha feature (which you can tell because the apiVersion in the linked yaml file is apps/v1alpha1). It may not be obvious, but alpha features are not supported in Google Container Engine.
As described in api_changes.md, alpha level API objects are disabled by default, have no guarantees that they will exist in future versions, can break compatibility with older versions at any time, and may destabilize the cluster.
I'm using PetSet with some success, for example https://github.com/Yolean/kubernetes-mysql-cluster, in zone europe-west1-d but when I tried europe-west1-c I got the aforementioned error.
Google just enabled Alpha Clusters for GKE as announced here: https://cloud.google.com/container-engine/docs/alpha-clusters
Now you are able (but not SLA covered) to use all alpha features within an alpha cluster, which was disabled previously.

Using kubectl with Kubernetes authorization mode ABAC

I set up a 4-node cluster (1 master, 3 workers) running Kubernetes on Ubuntu. I turned on --authorization-mode=ABAC and set up a policy file with an entry like the following
{"user":"bob", "readonly": true, "namespace": "projectgino"}
I want user bob to only be able to look at resources in projectgino. I'm having problems using kubectl command line as user Bob. When I run the following command
kubectl get pods --token=xxx --namespace=projectgino --server=https://xxx.xxx.xxx.xx:6443
I get the following error
error: couldn't read version from server: the server does not allow access to the requested resource
I traced the kubectl command line code and the problem seems to be caused by kubectl calling function NegotiateVersion in pkg/client/helper.go. This makes a call to /api on the server to get the version of Kubernetes. This call fails because the REST path doesn't contain namespace projectgino. I added trace code to pkg/auth/authorizer/abac/abac.go and it fails on the namespace check.
I haven't moved up to the latest 1.1.1 version of Kubernetes yet, but looking at the code I didn't see anything that has changed in this area.
Does anybody know how to configure Kubernetes to get around the problem?
This is missing functionality in the ABAC authorizer. The fix is in progress: #16148.
As for a workaround, from the authorization doc:
For miscellaneous endpoints, like /version, the resource is the empty string.
So you may be able to solve this by defining a policy:
{"user":"bob", "readonly": true, "resource": ""}
(note the empty string for resource) to grant access to unversioned endpoints. If that doesn't work I don't think there's a clean workaround that will let you use kubectl with --authorization-mode=ABAC.
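To see why the namespace-scoped entry rejects the /api call and what the empty-resource entry admits, here is an added Go model of the policy match (not the Kubernetes authorizer's code and not from the thread). It assumes the legacy ABAC rule that an unset or empty policy field matches any value; Policy, Request, Allowed and the sample requests are illustrative names.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Policy holds the ABAC policy-file fields used on this page.
type Policy struct {
	User      string `json:"user"`
	Readonly  bool   `json:"readonly"`
	Resource  string `json:"resource"`
	Namespace string `json:"namespace"`
}

// Request is a simplified stand-in for the attributes the authorizer checks.
type Request struct{ User, Verb, Resource, Namespace string }

// Allowed applies the assumed legacy rule: an unset policy field matches anything.
func Allowed(policyLine string, r Request) bool {
	var p Policy
	if json.Unmarshal([]byte(policyLine), &p) != nil {
		return false // an unparsable line grants nothing
	}
	readOnly := r.Verb == "get" || r.Verb == "list" || r.Verb == "watch"
	return (p.User == "" || p.User == r.User) &&
		(!p.Readonly || readOnly) &&
		(p.Resource == "" || p.Resource == r.Resource) &&
		(p.Namespace == "" || p.Namespace == r.Namespace)
}

// The two policy lines from the thread, and three constructed requests by bob.
const (
	Scoped     = `{"user":"bob", "readonly": true, "namespace": "projectgino"}`
	Workaround = `{"user":"bob", "readonly": true, "resource": ""}`
)

var (
	Discovery = Request{"bob", "get", "", ""} // GET /api: no resource, no namespace
	PodsGino  = Request{"bob", "list", "pods", "projectgino"}
	PodsOther = Request{"bob", "list", "pods", "kube-system"}
)

func main() {
	for _, pol := range []string{Scoped, Workaround} {
		fmt.Println(pol, Allowed(pol, Discovery), Allowed(pol, PodsGino), Allowed(pol, PodsOther))
	}
}
```

Under that assumption the workaround entry also matches bob's read requests in other namespaces, so review it against the isolation you intended for projectgino.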