When I run the kubectl get namespace command on my Kubernetes master node, I get the proper output. I also configured kubectl on my local machine, but when I run the same command from the local machine, I get an error like the following:
Error from server (Forbidden): namespaces is forbidden: User "system:node:mildevkub020" cannot list resource "namespaces" in API group "" at the cluster scope
I copied the configuration file kubelet.conf from the cluster into .kube/config, and also installed kubectl. This is the process I have followed so far.
The result of kubectl config view is as follows:
How can I resolve this issue?
Kubespray by default saves cluster admin kubeconfig file as inventory/mycluster/artifacts/admin.conf. Read more here: https://github.com/kubernetes-sigs/kubespray/blob/master/docs/getting-started.md#accessing-kubernetes-api
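A minimal sketch of switching the local machine over to that admin kubeconfig, assuming the default Kubespray artifact path and a reachable master host named master-node (both are placeholders for your own inventory name and host):

```shell
# Copy the cluster-admin kubeconfig, not kubelet.conf: the kubelet
# identity is system:node:... and RBAC forbids it from listing
# namespaces at the cluster scope, which is exactly the error above.
mkdir -p ~/.kube
scp master-node:kubespray/inventory/mycluster/artifacts/admin.conf ~/.kube/config

# Verify the active user and retry the failing command.
kubectl config view --minify
kubectl get namespace
```

These commands need a live cluster and the real paths from your setup, so treat them as a template rather than something to run verbatim.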
Related
Hi, I have a server configured with Kubernetes (without using minikube). I can execute kubectl commands without problems, like kubectl get all, kubectl delete pod, kubectl apply ...
I would like to know how to allow another user on my server to execute kubectl commands, because if I change to another user and try to execute kubectl get all -s localhost:8443 I get:
Error from server (BadRequest): the server rejected our request for an unknown reason
I have read the Kubernetes Authorization documentation, but I'm not sure if it is what I'm looking for.
This is happening because there is no kubeconfig file for that user. The other user needs the same kubeconfig file, either in the default location $HOME/.kube/config or in a location pointed to by the KUBECONFIG environment variable.
You can copy the existing kubeconfig file from the working user to the above location for the non-working user.
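A sketch of that copy for a hypothetical second user named alice, assuming the working user is root (usernames and paths are placeholders; run as root):

```shell
# Give alice her own copy of the working kubeconfig in the default location.
mkdir -p /home/alice/.kube
cp /root/.kube/config /home/alice/.kube/config
chown -R alice:alice /home/alice/.kube

# Alternatively, instead of copying, point KUBECONFIG at a shared file:
# export KUBECONFIG=/etc/kubernetes/admin.conf
```

The chown step matters: without it the new user may not be able to read the copied file, and kubectl will fail with a permission error instead.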
I was successfully able to connect to the Kubernetes cluster and work with the services and pods. At one point this changed, and every time I try to connect to the cluster I get the following error:
PS C:\Users\xxx> kubectl get pods
Unable to connect to the server: error parsing output for access token command "C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": yaml: line 4: could not find expected ':'
I am unsure what the issue is, and googling it unfortunately doesn't yield any results either.
I have not changed any config files or anything. It was a matter of it working one second and not working the next.
Thanks.
It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run; run it yourself to see if/why it fails.
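A sketch of that debugging step; the gcloud.cmd path below is just the one from the error message, so take the actual command from your own kubectl config view output:

```shell
# Show the client config, including the auth-provider command
# kubectl runs to fetch a token (GKE's gcloud plugin).
kubectl config view

# Run the token helper yourself to see the raw output kubectl failed to parse.
"C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\gcloud.cmd" config config-helper --format=json

# If the output is malformed, updating the SDK and re-authenticating often helps.
gcloud components update
gcloud auth login
```

These steps depend on your local Cloud SDK installation, so they are a template for diagnosis rather than a guaranteed fix.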
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
Try running this, or set the environment variable CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS to true.
Referenced from here
The workaround for this issue being:
gcloud container clusters get-credentials <cluster-name>
If you don't know your cluster name, find it with gcloud container clusters list.
Finally, if those don't have issues, run gcloud auth application-default login and log in with the relevant details.
I have a running cluster on Google Cloud Kubernetes engine and I want to access that using kubectl from my local system.
I tried installing kubectl with gcloud but it didn't work. Then I installed kubectl using apt-get. When I try to see its version using kubectl version, it says
Unable to connect to server EOF. I also don't have the file ~/.kube/config, which I am not sure why. Can someone please tell me what I am missing here? How can I connect to the already-running cluster in GKE?
gcloud container clusters get-credentials ... will auth you against the cluster using your gcloud credentials.
If successful, the command adds the appropriate configuration to ~/.kube/config so that you can use kubectl.
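A sketch of the full sequence, with hypothetical project, cluster, and zone names (substitute your own):

```shell
# Placeholders: my-project, my-cluster, us-central1-a.
gcloud config set project my-project
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# get-credentials writes the cluster entry and credentials into
# ~/.kube/config, after which kubectl can reach the cluster:
kubectl version
kubectl get nodes
```

This is also why ~/.kube/config did not exist yet in the question above: nothing creates it until get-credentials (or a similar tool) writes it.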
I created a cluster using kops. It worked fine and the cluster is healthy. I can see my nodes using kubectl and have created some deployments and services. I tried adding a node using "kops edit ig nodes" and got an error "cluster not found". Now I get that error for all kops commands:
kops validate cluster
Using cluster from kubectl context: <clustername>
cluster "<clustername>" not found
So my question is: where does kops look for clusters, and how do I configure it to see my cluster?
My KOPS_STATE_STORE environment variable got messed up. I corrected it to be the correct s3 bucket and everything is fine.
export KOPS_STATE_STORE=s3://correctbucketname
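The fix can be sanity-checked in the shell before rerunning kops (the bucket name is a placeholder):

```shell
# Point kops at the S3 bucket that actually holds the cluster state,
# then confirm the variable is set as expected.
export KOPS_STATE_STORE=s3://correctbucketname
echo "$KOPS_STATE_STORE"
```

With the variable corrected, kops validate cluster should find the cluster again, since kops looks up cluster definitions in the state store bucket rather than in kubectl's context.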
kubectl and kops access the configuration file from the following location.
When the cluster is created, the configuration is saved into the user's
$HOME/.kube/config
I have attached the link for further insight; for instance, if you have another config file, you can export it: kube-config
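For example, switching kubectl (and kops) to a second config file can be sketched like this (the file path is hypothetical):

```shell
# KUBECONFIG overrides the default $HOME/.kube/config lookup.
export KUBECONFIG=$HOME/.kube/config-second-cluster
echo "$KUBECONFIG"
```

Both kubectl and kops honor this variable; unset it to fall back to the default file.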
I am trying to set up Kubernetes for the first time. I am following the Fedora manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html
I am trying to get the Kubernetes addons running, specifically the kube-ui. I created the service and replication controller like so:
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
When I run
kubectl get events --namespace=kube-system
I see errors such as this:
Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Authentication is required.)
How am I supposed to tell Kubernetes to authenticate? This isn't covered in the documentation. So how do I fix this?
This happened due to a recent outage of GCE storage, as a result of which everyone saw this error while pulling images from GCR (which uses GCE storage on the backend).
Are you still seeing this error?
As the message says, you need credentials. Are you using Google Container Engine? Then you need to run:
gcloud config set project <your-project>
gcloud config set compute/zone <your-zone, like us-central1-f>
gcloud beta container clusters get-credentials --cluster <your-cluster-name>
Then your GKE cluster will have the credentials.