I'm following this tutorial (https://www.baeldung.com/spring-boot-minikube) and want to create a Kubernetes Deployment from a YAML file (simple-crud-dpl.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-crud
spec:
  selector:
    matchLabels:
      app: simple-crud
  replicas: 3
  template:
    metadata:
      labels:
        app: simple-crud
    spec:
      containers:
      - name: simple-crud
        image: simple-crud:latest
        imagePullPolicy: Never
        ports:
        - containerPort: 8080
But when I run kubectl create -f simple-crud-dpl.yaml, I get:
error: SchemaError(io.k8s.api.autoscaling.v2beta2.MetricTarget): invalid object doesn't have additional properties
I'm using the newest version of kubectl:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I'm also using minikube locally, as described in the tutorial. Everything works until the deployment and service steps; those I'm not able to create.
After installing kubectl with brew you should run:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
Optionally, you can first preview what would be linked:
brew link --overwrite --dry-run kubernetes-cli
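To confirm the link took effect, check where the kubectl on your PATH now points (the Cellar path below assumes a default Homebrew install):
ls -l $(which kubectl)      # should resolve into .../Cellar/kubernetes-cli/...
kubectl version --client    # should report the brew-installed version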
I second #rennekon's answer. I found that Docker was running on my machine, and Docker also installs its own kubectl. That copy of kubectl causes this issue to show.
I took the following steps:
uninstalled it using brew uninstall kubectl
reinstalled it using brew install kubectl
(due to symlink creation failure) I forced brew to create symlinks using brew link --overwrite kubernetes-cli
I was then able to run my kubectl apply commands successfully.
I had the same problem too. On my Mac, kubectl was being run from the copy that comes preinstalled with Docker. You can check this with the command below:
ls -l $(which kubectl)
which returns
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
Now we have to overwrite the symlink with the kubectl installed via brew:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
(optional)
brew unlink kubernetes-cli && brew link kubernetes-cli
To verify:
ls -l $(which kubectl)
I encountered the same issue with minikube on Windows 10 after installing Docker.
It was caused by the version mismatch of kubectl that was mentioned a couple of times already in this thread. Docker installs version 1.10 of kubectl.
You have a couple of options:
1) Make sure the path to your Kubernetes bin directory comes before Docker's entries in your PATH
2) Replace the kubectl in c:\Program Files\Docker\Docker\resources\bin with the correct version
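If you are not sure which copy wins, a quick check from a command prompt (a sketch; where lists every match on PATH, and the first result is the one that runs):
where kubectl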
Your client version is too old. In my environment that version came with Docker. I had to download a newer client from https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe, and now it works fine:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
You can use --validate=false in your command, which skips client-side schema validation (it works around the error, but the version skew remains). For example:
kubectl create -f simple-crud-dpl.yaml --validate=false
You are using the wrong kubectl version.
kubectl is compatible within one minor version (older or newer) of the server, as described in the official docs.
The error is confusing, but it simply means that your v1.10 client isn't sending all the parameters the v1.14 API requires.
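You can see the skew directly on your own machine; with the versions from the question, the check looks roughly like this:
kubectl version --short
# Client Version: v1.10.11
# Server Version: v1.14.0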
I am on Windows 10 with the Docker client and Minikube both installed. I was getting the error below:
error: SchemaError(io.k8s.api.core.v1.Node): invalid object doesn't have additional properties
I resolved it by updating the version of kubectl.exe to that being used by minikube. Here are the steps:
Note: Minikube tends to use the latest version of Kubernetes, so it is advisable to grab the latest kubectl.
Download the matching version of kubectl.exe.
Navigate to your Docker path where your kubectl is located e.g.
C:\Program Files\Docker\Docker\resources\bin
Place your downloaded kubectl.exe there. If it asks you to replace the existing file, do so.
Now type refreshenv in PowerShell.
Check that kubectl version now reports the version you placed there.
Now you are good; retry whatever task you were doing.
I was getting the error below while running kubectl explain pod on Windows 10:
error: SchemaError(io.k8s.api.core.v1.NodeCondition): invalid object doesn't have additional properties
I had both Minikube and Docker Desktop installed. The reason for this error, as mentioned in earlier answers, was a mismatch between the server version (major 1, minor 15) and the client version (major 1, minor 10). The client version was coming from Docker Desktop.
To fix it, I upgraded the kubectl client to v1.15.1 as described here:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/windows/amd64/kubectl.exe
Mac users!!! This is for those who installed Docker Desktop first. The error shows up when you use the apply command. The error comes from a version mismatch, as others have said. I did not install kubectl using Homebrew; rather, kubectl gets installed automatically when you install Docker Desktop for Mac.
To fix this, here is what I did:
Remove the kubectl executable file:
rm /usr/local/bin/kubectl
Download kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
Change the permission:
chmod +x ./kubectl
Move the executable file:
sudo mv ./kubectl /usr/local/bin/kubectl
That is it folks!
Just to show it worked, here is the output:
kubectl apply -f ./deployment.yaml
deployment.apps/tomcat-deployment created
Make sure the YAML file is correct. I downloaded a valid file to test from here:
https://github.com/LevelUpEducation/kubernetes-demo/tree/master/Introduction%20to%20Kubernetes/Your%20First%20k8s%20App
Was running into the same issue after installing kubectl on my Mac today. Uninstalling kubectl (via brew uninstall kubectl) and reinstalling it (brew install kubectl) resolved the issue for me.
According to the kubectl docs,
You must use a kubectl version that is within one minor version difference of your cluster.
A kubectl v1.10 client apparently makes requests to a v1.14 server without some parameters that became required over those four minor versions.
For brew users, reinstall kubernetes-cli. It's also worth checking what installed the incompatible version: inspect the symlink with ls -l $(which kubectl).
For me the Docker installation was the problem. Since Docker now ships with Kubernetes support, it installs kubectl along with itself. I had downloaded kubectl and minikube separately without realizing this, so my minikube was being driven by Docker's kubectl installation.
Make sure the same is not happening to you.
A second cause would be a deprecated apiVersion in your .yaml files.
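To rule out the deprecated-apiVersion cause, you can ask the server what it actually serves (standard kubectl commands):
kubectl api-versions                     # every group/version the server supports
kubectl explain deployment | head -n 2   # the KIND and VERSION kubectl resolves for Deployments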
I had a similar problem, with the error:
error: SchemaError(io.k8s.api.storage.v1beta1.CSIDriverList): invalid object doesn't have additional properties
My issue was that my Mac was using Google's kubectl, installed with the GCP tools. My PATH finds it before /usr/local/bin.
Once I ran the kubectl from /usr/local/bin, my problem went away.
In my case, kubectl was always resolving to Google's kubectl from the gcloud tool; there was most probably a conflict between the Homebrew-installed and gcloud-installed kubectl. I uninstalled the Homebrew kubectl and upgraded the gcloud tool to the latest version, which upgrades its kubectl in the process. That resolved my issue.
I don't think the problem is with imagePullPolicy, unless you don't have the image locally. The error is about autoscaling, which means it's not able to create replicas of the container.
Can you set replicas: 1 and give it a try?
On Windows 10, uninstalling Docker helped me get past this problem. I was working with kubectl and minikube.
I know this has already been answered, but I thought I should post my response since the responses above were helpful, yet it took me a while to relate them to Azure DevOps.
I was getting this error when trying to deploy an app to an AKS cluster from Azure DevOps. As mentioned above, one cause of this error is a version mismatch, which was the case for me. I fixed it by updating the kubectl version in the advanced configuration section of the deployment task.
Related
I have a GitHub Actions workflow that substitutes values in a deployment manifest. I use kubectl patch --local=true to update the image. This worked flawlessly until now; today the workflow started to fail with a "Missing or incomplete configuration info" error.
I am running kubectl with the --local flag, so a kubeconfig should not be needed. Does anyone know why kubectl suddenly started requiring a config? I can't find anything useful in the Kubernetes GitHub issues, and hours of googling didn't help.
Output of the failed step in the GitHub Actions workflow:
Run: kubectl patch --local=true -f authserver-deployment.yaml -p '{"spec":{"template":{"spec":{"containers":[{"name":"authserver","image":"test.azurecr.io/authserver:20201230-1712-d3a2ae4"}]}}}}' -o yaml > temp.yaml && mv temp.yaml authserver-deployment.yaml
error: Missing or incomplete configuration info. Please point to an existing, complete config file:
1. Via the command-line flag --kubeconfig
2. Via the KUBECONFIG environment variable
3. In your home directory as ~/.kube/config
To view or setup config directly use the 'config' command.
Error: Process completed with exit code 1.
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"ffd68360997854d442e2ad2f40b099f5198b6471", GitTreeState:"clean", BuildDate:"2020-11-18T13:35:49Z", GoVersion:"go1.15.0", Compiler:"gc", Platform:"linux/amd64"}
As a workaround I installed kind (the job takes longer to finish, but at least it works, and it can be used for e2e tests later).
I added these steps:
- name: Setup kind
  uses: engineerd/setup-kind@v0.5.0
- name: Verify cluster
  run: kubectl version
Also use --dry-run=client as an option for your kubectl command.
I do realize this is not the proper solution.
You still need to set a config to access the Kubernetes cluster. Even though you are modifying the file locally, you are still executing a kubectl command that normally runs against a cluster. By default, kubectl looks for a file named config in the $HOME/.kube directory.
The error current-context is not set indicates that there is no current context set and kubectl cannot be executed against a cluster. You can create a context for a Service Account using this tutorial.
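If you only need --local or --dry-run operations in CI and have no real cluster, a minimal stub kubeconfig may be enough to satisfy kubectl. A sketch, assuming the server address is a placeholder that is never contacted for local operations:
kubectl config set-cluster stub --server=https://localhost:6443   # placeholder address
kubectl config set-context stub --cluster=stub
kubectl config use-context stub
# the kubectl patch --local=true command from the question should now run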
Exporting the KUBERNETES_MASTER environment variable should do the trick:
$ export KUBERNETES_MASTER=localhost:8081 # 8081 port, just to ensure it works
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-13T16:12:48Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8081 was refused - did you specify the right host or port?
# Notice the port 8081 in the error message ^^^^^^
Now patch should work as before:
$ kubectl patch --local=true -f testnode.yaml -p '{"metadata":{"managedFields":[]}}' # to get the file content use -o yaml
node/k8s-w1 patched
Alternatively, you can update kubectl to a later version (v1.18.8 works fine even without the trick).
Explanation:
The change was likely introduced by PR #86173, "stop defaulting kubeconfig to http://localhost:8080".
It was reverted for the later 1.18.x releases in PR #90243, Revert "stop defaulting kubeconfig to http://localhost:8080"; see issue #90074, "kubectl --local requires a valid kubeconfig file", for the details.
I ended up using sed to replace the image string instead:
- name: Update manifests with new images
  working-directory: test/cloud
  run: |
    sed -i "s~image:.*$~image: ${{ steps.image_tags.outputs.your_new_tag }}~g" your-deployment.yaml
Works like a charm now.
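For anyone adapting this, it is worth a local dry run before committing the workflow; the tag below is just a hypothetical example value:
sed -i "s~image:.*$~image: test.azurecr.io/authserver:some-tag~g" your-deployment.yaml
grep "image:" your-deployment.yaml    # confirm the line was rewritten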
I am installing minikube again on my Windows machine (I did this a couple of years ago but hadn't used it in over a year), and the installation of the most recent kubectl and minikube went well. That is, up until I tried to start minikube with:
minikube start --vm-driver=virtualbox
Which gives the error:
C:\>minikube start --vm-driver=virtualbox
* minikube v1.6.2 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
* Selecting 'virtualbox' driver from user configuration (alternates: [])
! Specified Kubernetes version 1.10.0 is less than the oldest supported version: v1.11.10
X Sorry, Kubernetes 1.10.0 is not supported by this release of minikube
This doesn't make sense, since kubectl version --client reports v1.17.0:
C:\>kubectl version --client
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"windows/amd64"}
I did find that, for some reason, when the downloaded kubectl.exe sits in the kubectl folder in Program Files (x86) (which my PATH environment variable already points to), it reports version v1.14.3. But when I copied that same file to the root of the C: drive, it reported v1.17.0.
I assume being at the root works like adding it to the environment variables, but that means something still provides an old v1.14.3 kubectl, even though there are no other kubectl files in that folder.
So basically, I am not sure whether something needs to be set in minikube (the documentation doesn't mention anything), but somehow minikube is detecting an old kubectl that I need to get rid of.
Since you had minikube installed before and updated the installation, the best thing to do is execute minikube delete to clean up all the previous configuration (the old profile still has Kubernetes 1.10.0 recorded).
The minikube delete command deletes your cluster: it shuts down and deletes the Minikube virtual machine. No data or state is preserved.
After that, execute minikube start --vm-driver=virtualbox and wait for the cluster to come up.
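The whole sequence, with a status check at the end to confirm the new cluster is healthy, looks roughly like this:
minikube delete                         # shuts down and removes the old VM and its state
minikube start --vm-driver=virtualbox   # creates a fresh cluster
minikube status                         # host, kubelet and apiserver should report Running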
References:
https://kubernetes.io/docs/setup/learning-environment/minikube/#deleting-a-cluster
I followed the instructions on https://kubernetes.io/docs/tasks/tools/install-kubectl/ on my Mac and installed the Kubernetes CLI using brew:
brew install kubernetes-cli
kubectl and Minikube were already installed some time ago, so I was expecting an update. Now kubectl version and kubectl cluster-info time out.
pa-demo jps$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-08T16:31:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
When I try to install kubernetes-cli again, I get:
Updating Homebrew...
==> Auto-updated Homebrew!
Updated 2 taps (homebrew/core, homebrew/cask).
==> New Formulae
topgrade
==> Updated Formulae
bison ✔ azure-cli bwfmetaedit erlang#20 ghostscript jfrog-cli-go ldc p11-kit smlnj youtube-dl
sphinx-doc ✔ babel crystal fauna-shell helmfile juju mkvtoolnix pyside tarsnap-gui
alexjs bat doctl fortio influxdb kore nginx re2c thors-serializer
Warning: kubernetes-cli 1.11.2 is already installed and up-to-date
To reinstall 1.11.2, run `brew reinstall kubernetes-cli`
You may have installed Minikube, but that doesn't necessarily mean it's actually running. You need to run minikube start to start the cluster on your machine. That also configures your kubeconfig file to point at the cluster it built.
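In practice (the context name assumes the default minikube profile):
minikube start
kubectl config current-context   # should print "minikube"
kubectl cluster-info             # should now reach the VM instead of timing out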
I am trying to install a kops cluster on AWS, and as a prerequisite I installed kubectl per the instructions here:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
But when I try to verify the installation, I get the error below.
ubuntu#ip-172-31-30-74:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I am not sure why; I set up a cluster the same way earlier and everything worked fine.
Now I wanted to set up a new cluster, but I'm stuck on this.
Any help is appreciated.
Two things:
If every instruction was followed properly and you are still facing the same issue, #VAS's answer might help.
In my case, however, I was trying to verify with kubectl as soon as I had deployed the cluster. Note that, depending on the size of the master and worker nodes, the cluster takes some time to come up.
Once the cluster was up, kubectl communicated with it successfully. As silly as it sounds, I waited about 15 minutes until my master was running, and then everything worked fine.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This error usually means that your kubectl config is not correct: either it points to the wrong address or the credentials are wrong.
If you have successfully created a cluster with kops, you just need to export its connection settings to the kubectl config:
kops export kubecfg --name=<your_cluster_name> --config=~/.kube/config
If you want to use a separate config file for this cluster, you can do so by setting an environment variable:
export KUBECONFIG=~/.kube/your_cluster_name.config
kops export kubecfg --name your_cluster_name --config=$KUBECONFIG
You can also create a kubectl config for each team member using KOPS_STATE_STORE:
export KOPS_STATE_STORE=s3://<somes3bucket>
# NAME=<kubernetes.mydomain.com>
kops export kubecfg ${NAME}
In my particular case, I forgot to configure kubectl after the installation, which resulted in exactly the same symptoms.
More specifically, I forgot to create and populate the config file in the $HOME/.kube directory. You can read about how to do this properly here, but the following should suffice to make the error go away:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
I'm running kubectl create -f notRelevantToThisQuestion.yml
The response I get is:
Error from server (NotFound): the server could not find the requested resource
Is there any way to determine which resource was requested that was not found?
kubectl get ns returns:
NAME          STATUS    AGE
default       Active    243d
kube-public   Active    243d
kube-system   Active    243d
This is not a cron job.
Client version 1.9
Server version 1.6
This is very similar to https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1 but my k8s cluster has been deployed correctly (everything's been working for almost a year, I'm adding a new pod now).
To solve this, downgrade the client or upgrade the server. In my case I had upgraded the server (new minikube) but forgot to upgrade the client (kubectl), ending up with these versions:
$ kubectl version --short
Client Version: v1.9.0
Server Version: v1.14.1
When I upgraded the client (in this case to v1.14.2), everything started to work again.
Instructions on how to install (in your case, upgrade) the client are here: https://kubernetes.io/docs/tasks/tools/install-kubectl
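On Linux the upgrade can look like this, using the same release URL scheme seen elsewhere in this thread (pick a version within one minor of your server):
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.2/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --short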
I had the same error when trying to do CD with Jenkins and Kubernetes. In the pipeline I execute kubectl create -f app-deployment.yml -v=8, which reveals more information about the error.
The cause of the problem was a version mismatch:
From the documentation:
a client should be skewed no more than one minor version from the
master, but may lead the master by up to one minor version. For
example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes,
and should work with v1.2, v1.3, and v1.4 clients.
From http://words.yuvi.in/post/kubectl-rbac/
Running kubectl create -f notRelevantToThisQuestion.yml -v=8 will print all the HTTP traffic (requests and responses!) in an easy-to-read way. This way, you can identify from the HTTP responses which resource is not available.
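A diagnostic run would look like this (the exact output varies by cluster, so none is shown here):
kubectl create -f notRelevantToThisQuestion.yml -v=8
# scan the logged requests for the one answered with 404 NotFound;
# its URL path names the API group/version/resource the server lacks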
Apply these and then try again:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This solution is particularly for Mac users.
Step 1: Update the Kubernetes CLI
brew upgrade kubernetes-cli
Step 2: Overwrite it
brew link --overwrite kubernetes-cli
For OpenShift: I was using an old oc CLI version; updating to the latest oc CLI solved my issue.
I stumbled upon this question when creating a resource from the Dashboard. The resource was namespaced and I had no namespace selected. Selecting a namespace fixed the "the server could not find the requested resource" error.
In my case, I hadn't enabled Kubernetes in Docker Desktop. Enabling it fixed the problem.