I was successfully able to connect to the Kubernetes cluster and work with the services and pods. At one point this changed, and every time I try to connect to the cluster I get the following error:
PS C:\Users\xxx> kubectl get pods
Unable to connect to the server: error parsing output for access token command "C:\\Program Files (x86)\\Google\\Cloud SDK\\google-cloud-sdk\\bin\\gcloud.cmd config config-helper --format=json": yaml: line 4: could not find expected ':'
I am unsure of what the issue is. Googling unfortunately doesn't yield any results for me either.
I have not changed any config files or anything. It was a matter of it working one second and not working the next.
Thanks.
It looks like the default auth plugin for GKE might be buggy on Windows. kubectl is trying to run gcloud to get a token to authenticate to your cluster. If you run kubectl config view you can see the command it tried to run; you can then run it yourself to see if/why it fails.
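For instance, in PowerShell you could inspect the kubeconfig and then invoke the same helper command yourself (the path below is simply the one taken from the error message above):
PS C:\Users\xxx> kubectl config view
PS C:\Users\xxx> & "C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\bin\gcloud.cmd" config config-helper --format=json
Look for the cmd-path/cmd-args entries under the user's auth-provider config in the kubectl config view output; that is the command kubectl runs on your behalf.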
As Alexandru said, a workaround is to use Google Application Default Credentials. Actually, gcloud container has built in support for doing this, which you can toggle by setting a property:
gcloud config set container/use_application_default_credentials true
Try running this, or set the environment variable %CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS% to true.
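For example, in PowerShell you could set it for the current session, or persistently for your user account, roughly like this:
PS C:\Users\xxx> $env:CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS = "true"
PS C:\Users\xxx> [Environment]::SetEnvironmentVariable("CLOUDSDK_CONTAINER_USE_APPLICATION_DEFAULT_CREDENTIALS", "true", "User")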
Referenced from here
The workaround for this issue is:
gcloud container clusters get-credentials <cluster-name>
If you don't know your cluster name, find it with:
gcloud container clusters list
Finally, if those don't have issues, run:
gcloud auth application-default login
and log in with the relevant details.
I am currently playing around with AWS EKS,
but I always get the error: You must be logged in to the server (Unauthorized) when trying to run the kubectl cluster-info command.
I have read a lot of AWS documentation and looked at lots of similar issues from people facing the same problem. Unfortunately, none of them resolves my problem.
So, this is what I did:
install all required packages
create a user to access aws-cli named crop-portal
create a role for EKS named crop-cluster
create an EKS cluster via the AWS console with the role crop-cluster, named crop-cluster (cluster and role have the same name)
run aws configure for user crop-portal
run aws eks update-kubeconfig --name crop-cluster to update the kube config
run aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access
copy accessKey, secretKey and sessionToken into the env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN accordingly
run aws sts get-caller-identity and now the result says it used the assumed role already
{
"UserId": "AROAXWZGX5HOBZPVGAUKC:botocore-session-1572604810",
"Account": "529972849116",
"Arn": "arn:aws:sts::529972849116:assumed-role/crop-cluster/botocore-session-1572604810"
}
run kubectl cluster-info and always get the error: You must be logged in to the server (Unauthorized)
when I run aws-iam-authenticator token -i crop-cluster, it gave me the token and
when I run aws-iam-authenticator verify -t token -i crop-portal, it also passed
&{ARN:arn:aws:sts::529972849116:assumed-role/crop-cluster/1572605554603576170 CanonicalARN:arn:aws:iam::529972849116:role/crop-cluster AccountID:529972849116 UserID:AROAXWZGX5HOBZPVGAUKC SessionName:1572605554603576170}
I don't know what is wrong or what I'm missing. I have tried so hard to get it working, but I really don't know what to do after this.
Some people suggest creating the cluster with awscli instead of the GUI. I tried both methods and neither of them works; creating with awscli or the GUI gives the same result for me.
Please, someone help :(
I will try to add some more information here and I hope it will be more helpful while setting up the access to the EKS cluster.
When we create the EKS cluster by any method (via CloudFormation/CLI/eksctl), the IAM role/user who created the cluster will automatically be bound to the default Kubernetes RBAC group "system:masters" (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles), and in this way the creator of the cluster gets admin access to the cluster.
To verify which role or user created the EKS cluster, we can search for the CreateCluster API call in CloudTrail, and it will tell us the creator of the cluster.
Now, generally, if we use a role to create the cluster as you did (for example "crop-cluster"), we have to make sure that we are assuming this role before making any API calls using kubectl. The easiest way to do this is to set this role in the kubeconfig file, and we can easily do that by running the below command from the terminal.
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn crop-cluster-arn
Now if we run the above command, it will set the role with the -r flag in the kubeconfig file. In that way we are telling aws/aws-iam-authenticator that before making any API call it should first assume the role, so WE DON'T HAVE TO ASSUME THE ROLE MANUALLY via the CLI using "aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access".
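Just to illustrate what that does, with an older CLI that uses aws-iam-authenticator the resulting users entry in ~/.kube/config would look roughly like the sketch below (names and ARNs are placeholders; newer CLI versions generate an equivalent aws eks get-token exec block instead):
users:
- name: crop-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - crop-cluster
        - -r
        - arn:aws:iam::111122223333:role/crop-cluster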
Once the kubeconfig file is set properly, make sure that the CLI is configured with the IAM user credentials "crop-portal". We can confirm this by running the "aws sts get-caller-identity" command; the output should show us the user ARN in the "Arn" section like below.
$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/crop-portal"
}
Once that is done, you should be able to run kubectl commands directly without any issue.
Note: I have assumed that the user "crop-portal" does have enough permission to assume the role "crop-cluster"
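If it doesn't, a minimal sketch of a policy on the "crop-portal" user that allows assuming the role would be something like the following (the account ID is a placeholder), and the role's trust policy must also allow "crop-portal" (or the account) as a principal:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::111122223333:role/crop-cluster"
    }
  ]
}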
Note: For more details we can also refer to the answer to this question: Getting error "An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied" after setting up EKS cluster
I have a running cluster on Google Cloud Kubernetes engine and I want to access that using kubectl from my local system.
I tried installing kubectl with gcloud but it didn't work. Then I installed kubectl using apt-get. When I try to check its version using kubectl version, it says
Unable to connect to the server: EOF. I also don't have the file ~/.kube/config, and I am not sure why. Can someone please tell me what I am missing here? How can I connect to the already running cluster in GKE?
gcloud container clusters get-credentials ... will auth you against the cluster using your gcloud credentials.
If successful, the command adds the appropriate configuration to ~/.kube/config so that you can use kubectl.
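A minimal sequence, assuming your cluster is named my-cluster in zone europe-west1-d under project my-project (substitute your own values), would be:
$ gcloud container clusters get-credentials my-cluster --zone europe-west1-d --project my-project
$ kubectl get nodes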
When I run a simple command in my local shell with the gcloud SDK:
$ kubectl get pod
I get such error:
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods at the cluster scope: Unknown user "client"
The same command runs fine on GCP cloud shell, and the output of
$ gcloud auth list
is as expected:
Credentialed Accounts
ACTIVE ACCOUNT
* foo#bar.com
I also tried to create clusterrolebinding, but get similar error.
This happens when you disable Legacy Authorisation in the cluster settings, because the client certificate that you are using is a legacy authentication method. So it looks like what is happening is the client authentication succeeds but the authorisation fails, as expected. ("Unknown user" in the error message, confusingly, seems to mean the user is unknown to the authorisation system, not to the authentication system.)
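If you want to confirm whether legacy authorisation (ABAC) is enabled on your cluster, one quick check could be the following (cluster name and zone are placeholders; the field is empty or False when legacy authorisation is disabled):
$ gcloud container clusters describe my-cluster --zone europe-west1-d --format="value(legacyAbac.enabled)"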
You can either disable the use of the client certificate with
gcloud config unset container/use_client_certificate
and then regenerate your kubectl config with
gcloud container clusters get-credentials my-cluster
Or you can simply re-enable Legacy Authorisation in the cluster settings in the Google Cloud Console, or using the command:
gcloud container clusters update [CLUSTER_NAME] --enable-legacy-authorization
I understand this issue has now been resolved, but I would like to add some information about why this issue can occur, as it may be useful to anyone who comes across a similar issue.
Kubernetes Engine users can authenticate to the Kubernetes API using Google OAuth2 access tokens, which means that when users create a new cluster, Kubernetes Engine configures kubectl to authenticate the user to the cluster.
It's also possible to authenticate to the cluster using legacy methods which include using the cluster certificate and/or username and passwords. This is defined in the gcloud config.
The configuration of gcloud in, for example the Cloud Shell may be different from an installation of gcloud elsewhere, for example on a home workstation.
The:
Error from server (Forbidden): pods is forbidden: User "client" cannot
list pods at the cluster scope: Unknown user "client"
error suggests that gcloud config set container/use_client_certificate is set to True i.e. that gcloud is expecting a client cluster certificate to authenticate to the cluster (this is what the 'client' in the error message refers to).
As #Yanwei has discovered, unsetting container/use_client_certificate by issuing the following command in the gcloud config removes the need for a legacy certificate or credentials and prevents the error message:
gcloud config unset container/use_client_certificate
Issues such as this may be more likely if you are using an older version of gcloud on your home workstation or elsewhere.
There is some information on this here.
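To see what your local gcloud is currently configured to do before changing anything, you can print the property (it may simply be unset):
$ gcloud config get-value container/use_client_certificate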
Found out there is some issue with gcloud config. This command solved it:
gcloud config unset container/use_client_certificate
In addition to setting
gcloud config unset container/use_client_certificate
Also make sure you do not have this env variable set to True
CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE
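In a bash shell, for example, you could check and clear it like this:
$ echo $CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE
$ unset CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE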
Which steps must one currently go through in order to authenticate against Google Container Engine/Kubernetes 1.4.5?
As I set up a third Google Cloud project today, I experienced that my previous GKE cluster setup flow no longer worked. My flow was the following:
gcloud auth login
gcloud config set compute/region europe-west1
gcloud config set compute/zone europe-west1-d
gcloud config set project myproject
gcloud container clusters get-credentials staging
# An example of a typical kubectl command to see that you've got the right cluster
kubectl get pods --all-namespaces
Whereas this used to work perfectly, I was now getting permission errors while trying to query the cluster, e.g. kubectl get pods would emit the following error message: the server does not allow access to the requested resource (get pods)
After googling back and forth, I realized that kubectl depends on something called Application Default Credentials. At some point I also noticed by chance that gcloud auth login emits the following:
WARNING: `gcloud auth login` no longer writes application default credentials.
If you need to use ADC, see:
gcloud auth application-default --help
So I eventually realized that, with the current gcloud/Kubernetes version, I also need to call gcloud auth application-default login in order to use the credentials of my current account rather than those of the previously activated project.
So, I am hoping someone can please clarify what is the actual authentication workflow for Google Container Engine/Kubernetes version 1.4.5??
You found out the right answer. kubectl's GCP authentication plugin only supports Application Default Credentials, which were recently decoupled from gcloud's standard credentials. So, in 1.4.5 you need to run gcloud auth application-default login to ensure that kubectl is using the credentials you expect.
We think that most users just expect to use the same credentials as gcloud, with ADC being useful for some service account scenarios where gcloud might not even be installed. So, there is a pull request to Kubernetes to add a "use gcloud credentials" option to the kubectl gcp authentication plugin. This should be available in kubectl 1.5.
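So, adapting the flow from the question, the extra ADC step slots in roughly like this (project, zone and cluster names are the ones from the question):
gcloud auth login
gcloud auth application-default login
gcloud config set compute/zone europe-west1-d
gcloud config set project myproject
gcloud container clusters get-credentials staging
kubectl get pods --all-namespaces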
I am trying to setup Kubernetes for the first time. I am following the Fedora Manual installation guide: http://kubernetes.io/v1.0/docs/getting-started-guides/fedora/fedora_manual_config.html
I am trying to get the Kubernetes addons running, specifically the kube-ui. I created the service and replication controller like so:
kubectl create -f cluster/addons/kube-ui/kube-ui-rc.yaml --namespace=kube-system
kubectl create -f cluster/addons/kube-ui/kube-ui-svc.yaml --namespace=kube-system
When I run
kubectl get events --namespace=kube-system
I see errors such as this:
Failed to pull image "gcr.io/google_containers/pause:0.8.0": image pull failed for gcr.io/google_containers/pause:0.8.0, this may be because there are no credentials on this request. details: (Authentication is required.)
How am I supposed to tell Kubernetes to authenticate? This isn't covered in the documentation. So how do I fix this?
This happened due to a recent outage of GCE storage, as a result of which all of us went through this error while pulling images from GCR (which uses GCE storage on the backend).
Are you still seeing this error ?
As the message says, you need credentials. Are you using Google Container Engine? Then you need to run
gcloud config set project <your-project>
gcloud config set compute/zone <your-zone, like us-central1-f>
gcloud beta container clusters get-credentials --cluster <your-cluster-name>
then your GCE cluster will have the credentials.
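As a quick sanity check afterwards, something like the following should then work without an authentication error:
kubectl get pods --namespace=kube-system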