kubectl version on CodeBuild prints error...
[Container] 2019/08/26 04:07:32 Running command kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
error: You must be logged in to the server (the server has asked for the client to provide credentials)
I'm using an Amazon EKS cluster.
It seems some authentication setup is missing...?
What I did:
Set up a CodeBuild project (a new service role codebuild-hoge-service-role was created).
Added the eks:DescribeCluster permission to the role as an inline policy, because aws eks update-kubeconfig requires it (see the policy sketch after this list).
Edited configmap/aws-auth to bind the role to RBAC with kubectl edit -n kube-system configmap/aws-auth on my local machine, adding a new entry to mapRoles like:
mapRoles: |
  - rolearn: .....
  - rolearn: arn:aws:iam::999999999999:role/service-role/codebuild-hoge-service-role
    username: codebuild
    groups:
      - system:masters
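For reference, the inline policy from step 2 is a standard IAM policy document; a minimal sketch:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "*"
    }
  ]
}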
That's all.
Not enough? Is there anything I missed?
I also tried another approach to debug, and it worked successfully:
Create an IAM user and an IAM role; the user can switch to the role (assume role).
Edit configmap/aws-auth and add config for the role (same as in the failing setup).
Switch to the role locally and execute kubectl version. It worked!!
buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
    commands:
      - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
      - chmod +x ./kubectl
      - mv -f ./kubectl /usr/local/bin/kubectl
  pre_build:
    commands:
      - aws eks update-kubeconfig --name mycluster
      - kubectl version
  build:
    commands:
      - kubectl get svc -A
I had the same issue. I just had to create another role with the same trust relationship and policies as the original one, and it worked.
The only thing I did differently was not include the /service-role/ path, so the ARN looked like: arn:aws:iam::123456789012:role/another-codebuild-role.
I was facing the same issue today. The answers provided here do work, but in essence what you need to do is remove the /service-role path from the role's ARN that you use in the aws-auth ConfigMap. There is actually no need to create a separate role: in the IAM console you can keep the role with the /service-role path in its ARN, and remove it only in the aws-auth ConfigMap.
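For example, with the role from the question above, the mapRoles entry would reference the ARN with the path segment stripped (a sketch of the fix described here):
mapRoles: |
  - rolearn: arn:aws:iam::999999999999:role/codebuild-hoge-service-role
    username: codebuild
    groups:
      - system:masters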
Please see the following article for details: AWS Knowledge Center Article
While running helm init I was getting an error:
Error: error installing: the server could not find the requested resource (post deployments.extensions)
I solved it by running:
helm init --client-only
But when I run:
helm upgrade --install --namespace demo demo-databases-ephemeral charts/databases-ephemeral --wait
I'm getting:
Error: serializer for text/html; charset=utf-8 doesn't exist
I have found nothing convincing as a solution, and I'm not able to proceed with the setup.
Any help would be appreciated.
Check if your ~/.kube/config exists and is properly set up. If not, run the following command:
sudo cp -i /etc/kubernetes/admin.conf ~/.kube/config
Now check that kubectl is properly set up using:
kubectl version
This answer is specific to the issue you are getting. If this does not resolve the issue, please provide more of the error log.
Apparently, your kube-dns pod is not able to reach the API server, so it returns text/html rather than JSON.
1) Check for errors in the DNS container apart from Error: serializer for text/html; charset=utf-8 doesn't exist:
kubectl logs <kube-dns-pod> -n kube-system kubedns
2) Update your DNS pod config with the following flags:
--kubecfg-file=~/.kube/config <-- path to your kube-config file
--kube-master-url=https://0.0.0.0:3000 <--address to your master node
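A sketch of where those flags go, assuming a typical kube-dns Deployment (the container name and paths are placeholders; adjust to your setup):
# kubectl edit deployment kube-dns -n kube-system
containers:
  - name: kubedns
    args:
      - --kubecfg-file=/path/to/your/kubeconfig
      - --kube-master-url=https://<master-address>:<port>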
I'm following this tutorial (https://www.baeldung.com/spring-boot-minikube).
I want to create a Kubernetes deployment in a YAML file (simple-crud-dpl.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-crud
spec:
  selector:
    matchLabels:
      app: simple-crud
  replicas: 3
  template:
    metadata:
      labels:
        app: simple-crud
    spec:
      containers:
        - name: simple-crud
          image: simple-crud:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
but when I run kubectl create -f simple-crud-dpl.yaml I get:
error: SchemaError(io.k8s.api.autoscaling.v2beta2.MetricTarget): invalid object doesn't have additional properties
I'm using the newest version of kubectl:
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:38:32Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
I'm also using minikube locally, as described in the tutorial. Everything works up to the deployment and service steps; those I'm unable to complete.
After installing kubectl with brew you should run:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
Optionally, you can preview what would be linked first with a dry run:
brew link --overwrite --dry-run kubernetes-cli
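To confirm the brew-linked binary is now the one on your PATH (a quick check, not part of the original steps):
ls -l $(which kubectl)
kubectl version --client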
I second #rennekon's answer. I found that I had Docker running on my machine, which also installs kubectl. That installation of kubectl causes this issue.
I took the following steps:
uninstalled it using brew uninstall kubectl
reinstalled it using brew install kubectl
when the symlink creation failed, I forced brew to create it using brew link --overwrite kubernetes-cli
I was then able to run my kubectl apply commands successfully.
I had the same problem too. On my Mac, kubectl was running from Docker, which bundles its own kubectl when installed. You can check this using the command below:
ls -l $(which kubectl)
which returns
/usr/local/bin/kubectl -> /Applications/Docker.app/Contents/Resources/bin/kubectl
Now we have to overwrite the symlink with the kubectl that was installed using brew:
rm /usr/local/bin/kubectl
brew link --overwrite kubernetes-cli
(optional)
brew unlink kubernetes-cli && brew link kubernetes-cli
To Verify
ls -l $(which kubectl)
I encountered the same issue on minikube/Windows 10 after installing Docker.
It was caused by the kubectl version mismatch mentioned a couple of times already in this thread: Docker installs version 1.10 of kubectl.
You have a couple of options:
1) Make sure the path to your Kubernetes bin directory comes before Docker's in your PATH
2) Replace the kubectl in C:\Program Files\Docker\Docker\resources\bin with the correct one
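To see which kubectl wins in your PATH order, from PowerShell (a quick diagnostic, not part of the original answer):
where.exe kubectl
kubectl version --client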
Your client version is too old. In my environment that version came with Docker. I had to download a new client from https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/windows/amd64/kubectl.exe, and now it works fine:
kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:53:57Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T15:45:25Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
You can use "--validate=false" in your command; this skips client-side schema validation (the API server still validates the object). For example:
kubectl create -f simple-crud-dpl.yaml --validate=false
You are using the wrong kubectl version.
kubectl is compatible within one minor version up or down, as described in the official docs.
The error is confusing, but it simply means that your v1.10 client isn't sending all the parameters the v1.14 API requires.
I am on Windows 10 with the Docker client and Minikube both installed. I was getting the error below:
error: SchemaError(io.k8s.api.core.v1.Node): invalid object doesn't have additional properties
I resolved it by updating kubectl.exe to the version used by minikube. Here are the steps:
Note: Minikube tends to use the latest version of Kubernetes, so it is advisable to grab the latest kubectl.
Download the matching version of kubectl.exe.
Navigate to the Docker path where your kubectl is located, e.g.
C:\Program Files\Docker\Docker\resources\bin
Place your downloaded kubectl.exe there. If it asks you to replace it, do so.
Now type refreshenv in PowerShell.
Check that the new version is the one you placed there: kubectl version.
Now you are good; retry whatever task you were doing.
I was getting the error below while running kubectl explain pod on Windows 10:
error: SchemaError(io.k8s.api.core.v1.NodeCondition): invalid object doesn't have additional properties
I had both Minikube and Docker Desktop installed. The reason for this error, as mentioned in earlier answers as well, was a mismatch between the server version (1.15) and the client version (1.10). The client version was coming from Docker Desktop.
To fix it, I upgraded the kubectl client to v1.15.1 as described here:
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/windows/amd64/kubectl.exe
Mac users!!! This is for those who installed Docker Desktop first. The error shows up when you use the apply command. It comes from a version mismatch, as others have said here. I did not install kubectl using Homebrew; rather, kubectl gets installed automatically when you install Docker Desktop for Mac.
To fix this, here is what I did:
Remove the kubectl executable file
rm /usr/local/bin/kubectl
Download kubectl:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl
Change the permission:
chmod +x ./kubectl
Move the executable file:
sudo mv ./kubectl /usr/local/bin/kubectl
That is it folks!
Just to show it worked here is the output:
kubectl apply -f ./deployment.yaml
deployment.apps/tomcat-deployment created
Make sure the YAML file is correct. I downloaded a valid file from here to test:
https://github.com/LevelUpEducation/kubernetes-demo/tree/master/Introduction%20to%20Kubernetes/Your%20First%20k8s%20App
I was running into the same issue after installing kubectl on my Mac today. Uninstalling kubectl (brew uninstall kubectl) and reinstalling it (brew install kubectl) resolved the issue for me.
According to the kubectl docs,
You must use a kubectl version that is within one minor version difference of your cluster.
The kubectl v1.10 client apparently makes requests to the v1.14 server without some parameters that became required over those four minor versions.
For brew users, reinstall kubernetes-cli. It's also worth checking what installed the incompatible version: inspect the symlink with ls -l $(which kubectl).
For me the Docker installation was the problem. Since Docker now comes with Kubernetes support, it installs kubectl along with itself. I had downloaded kubectl and minikube without knowing this, so my minikube was being driven by Docker's kubectl installation.
Make sure the same is not happening to you.
A second cause would be a deprecated apiVersion in your .yaml files.
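For example (an illustrative snippet, not from the question): a Deployment written against the deprecated extensions/v1beta1 group should be moved to apps/v1:
# old, deprecated
apiVersion: extensions/v1beta1
kind: Deployment

# current
apiVersion: apps/v1
kind: Deployment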
I had a similar problem with the error:
error: SchemaError(io.k8s.api.storage.v1beta1.CSIDriverList): invalid object doesn't have additional properties
My issue was that my Mac was using Google's kubectl, installed with the GCP tools. My PATH looks there first before going into /usr/local/bin/.
Once I ran kubectl from /usr/local/bin, my problem went away.
In my case, kubectl was always using Google's kubectl from the gcloud tool; there was most probably a conflict between the Homebrew-installed and the gcloud-installed kubectl. I uninstalled the Homebrew kubectl and upgraded the gcloud tool to the latest version, which upgrades its kubectl in the process. That resolved my issue.
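To check which kubectl the gcloud SDK provides and bring it up to date (assuming the SDK's component manager is enabled):
gcloud components list | grep kubectl
gcloud components update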
I don't think the problem is with imagePullPolicy, unless you don't have the image locally. The error is about autoscaling, which means it's not able to create replicas of the container.
Can you set replicas: 1 and give it a try?
On Windows 10, uninstalling Docker helped me get past this problem. I was working with kubectl and minikube.
I know this has already been answered, but I thought I should post my response since the answers above were helpful, and it took me a while to relate them to Azure DevOps.
I was getting this error when trying to deploy an app to an AKS cluster from Azure DevOps. As mentioned above, one cause of this error is a version mismatch, which was the case for me. I fixed it by updating the kubectl version in the advanced configuration section of the deployment task.
I am trying to install a kops cluster on AWS, and as a prerequisite I installed kubectl as per the instructions provided here:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
But when I try to verify the installation, I get the error below.
ubuntu#ip-172-31-30-74:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I am not sure why, because I had set up a cluster in a similar way earlier and everything worked fine.
Now I wanted to set up a new cluster, but I'm stuck on this.
Any help appreciated.
Two things:
If every instruction was followed properly and you are still facing the same issue, #VAS's answer might help.
However, in my case I was trying to verify with kubectl as soon as I had deployed the cluster. Note that, depending on the size of the master and worker nodes, the cluster takes some time to come up.
Once the cluster was up, kubectl was able to communicate successfully. As silly as it sounds, I waited 15 minutes or so until my master was successfully running; then everything worked fine.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This error usually means that your kubectl config is not correct: either it points to the wrong address or the credentials are wrong.
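To inspect what kubectl is actually pointing at, standard kubectl offers a quick diagnostic (not part of the original answer):
kubectl config view --minify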
If you have successfully created a cluster with kops, you just need to export its connection settings to the kubectl config:
kops export kubecfg --name=<your_cluster_name> --config=~/.kube/config
If you want to use a separate config file for this cluster, you can do it by setting the environment variable:
export KUBECONFIG=~/.kube/your_cluster_name.config
kops export kubecfg --name your_cluster_name --config=$KUBECONFIG
You can also create a kubectl config for each team member using KOPS_STATE_STORE:
export KOPS_STATE_STORE=s3://<somes3bucket>
# NAME=<kubernetes.mydomain.com>
kops export kubecfg ${NAME}
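After exporting, a quick sanity check with standard kubectl (assuming the cluster is up and reachable):
kubectl config current-context
kubectl get nodes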
In my particular case I forgot to configure kubectl after the installation which resulted in the exact same symptoms.
More specifically I forgot to create and populate the config file in $HOME/.kube directory. You can read about how to do this properly here, but this should suffice to make the error go away:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
I'm running kubectl create -f notRelevantToThisQuestion.yml
The response I get is:
Error from server (NotFound): the server could not find the requested resource
Is there any way to determine which resource was requested that was not found?
kubectl get ns returns:
NAME          STATUS   AGE
default       Active   243d
kube-public   Active   243d
kube-system   Active   243d
This is not a cron job.
Client version 1.9
Server version 1.6
This is very similar to https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1 but my k8s cluster has been deployed correctly (everything's been working for almost a year, I'm adding a new pod now).
To solve this, downgrade the client or upgrade the server. In my case I had upgraded the server (new minikube) but forgot to upgrade the client (kubectl), and ended up with these versions:
$ kubectl version --short
Client Version: v1.9.0
Server Version: v1.14.1
When I upgraded the client version (in this case to v1.14.2), everything started to work again.
Instructions on how to install (in your case, upgrade) the client are here: https://kubernetes.io/docs/tasks/tools/install-kubectl
I had the same error when trying to do CD with Jenkins and Kubernetes. In the pipeline I executed kubectl create -f app-deployment.yml -v=8, which shows more information about the error.
The cause of the problem was the versions:
From the documentation:
a client should be skewed no more than one minor version from the master, but may lead the master by up to one minor version. For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and should work with v1.2, v1.3, and v1.4 clients.
From http://words.yuvi.in/post/kubectl-rbac/
Running kubectl create -f notRelevantToThisQuestion.yml -v=8 will print all the HTTP traffic (requests and responses!) in an easy-to-read way. In this way, one can identify which resource is not available from the HTTP responses.
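For example, to surface just the request and response-status lines from the verbose output (a convenience filter; kubectl writes its verbose logging to stderr):
kubectl create -f notRelevantToThisQuestion.yml -v=8 2>&1 | grep -E "Request Headers|Response Status"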
Apply these and then try again:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This solution is particularly for Mac users.
Step 1: Update kubernetes-cli
brew upgrade kubernetes-cli
Step 2: Overwrite the symlink
brew link --overwrite kubernetes-cli
For OpenShift, I was using an old oc CLI version; updating to the latest oc CLI solved my issue.
I stumbled upon this question when creating a resource from the Dashboard.
The resource was namespaced and I had no namespace selected. Selecting a namespace fixed the "the server could not find the requested resource" error.
In my case, I hadn't enabled Kubernetes in Docker Desktop.
I enabled it, and that worked.