Accessing Kubernetes API using username and password - kubernetes

I am currently configuring a Heketi server (deployed on K8S cluster ClusterA) to interact with my GlusterFS cluster, which is deployed as a DaemonSet on another K8S cluster, ClusterB.
One of the configurations Heketi requires to connect to the GlusterFS K8S cluster is:
"kubeexec": {
"host" :"https://<URL-OF-CLUSTER-WITH-GLUSTERFS>:6443",
"cert" : "<CERTIFICATE-OF-CLUSTER-WITH-GLUSTERFS>",
"insecure": false,
"user": "WHERE_DO_I_GET_THIS_FROM",
"password": "<WHERE_DO_I_GET_THIS_FROM>",
"namespace": "default",
"backup_lvm_metadata": false
},
As you can see, it requires a user and password. I have no idea where to get that from.
One thing that comes to mind is creating a service account on ClusterB and using its token to authenticate, but Heketi does not seem to accept that as an authentication mechanism.
The cert is something that I got from /usr/local/share/ca-certificates/kube-ca.crt but I have no idea where to get the user/password from. Any idea what could be done?
If I do a kubectl config view I only see certificates for the admin user of my cluster.

That could only mean one thing: basic HTTP auth.
You can specify a username/password in a file when you start the kube-apiserver with the --basic-auth-file=SOMEFILE option.
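A minimal sketch of what that looks like, assuming a cluster version old enough to still support static password files (the flag was removed in Kubernetes 1.19); the path, names and group below are only examples:
# /etc/kubernetes/basic-auth.csv -- one entry per line: password,username,uid[,"group1,group2,..."]
mypassword,heketi-user,heketi-uid,"system:masters"
# pass the file to the API server at startup:
kube-apiserver ... --basic-auth-file=/etc/kubernetes/basic-auth.csv
The username and password from that file are then what you would put into the "user" and "password" fields of the kubeexec section.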

Related

Always getting error: You must be logged in to the server (Unauthorized) EKS

I am currently playing around with AWS EKS
But I always get error: You must be logged in to the server (Unauthorized) when trying to run the kubectl cluster-info command.
I have read a lot of AWS documentation and looked at lots of similar issues from people facing the same problem. Unfortunately, none of them resolved my problem.
So, this is what I did
install all required packages
create a user named crop-portal to access the aws-cli
create a role for EKS named crop-cluster
create an EKS cluster named crop-cluster via the AWS console with the role crop-cluster (cluster and role have the same name)
run aws configure for the user crop-portal
run aws eks update-kubeconfig --name crop-cluster to update the kube config
run aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access
copy the accessKey, secretKey and sessionToken into the env variables AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN accordingly
run aws sts get-caller-identity and now the result says it is using the assumed role already
{
"UserId": "AROAXWZGX5HOBZPVGAUKC:botocore-session-1572604810",
"Account": "529972849116",
"Arn": "arn:aws:sts::529972849116:assumed-role/crop-cluster/botocore-session-1572604810"
}
run kubectl cluster-info and always get error: You must be logged in to the server (Unauthorized)
when I run aws-iam-authenticator token -i crop-cluster, it gives me the token, and
when I run aws-iam-authenticator verify -t token -i crop-portal, it also passes:
&{ARN:arn:aws:sts::529972849116:assumed-role/crop-cluster/1572605554603576170 CanonicalARN:arn:aws:iam::529972849116:role/crop-cluster AccountID:529972849116 UserID:AROAXWZGX5HOBZPVGAUKC SessionName:1572605554603576170}
I don't know what is wrong or what I am missing. I have tried hard to get it working, but I really don't know what to do after this.
Some people suggest creating the cluster with the awscli instead of the GUI. I tried both methods and neither of them works; creating with the awscli or the GUI gives the same result for me.
Please, someone help :(
I will try to add some more information here, and I hope it will be helpful while setting up access to the EKS cluster.
When we create an EKS cluster by any method (CloudFormation/CLI/eksctl), the IAM role/user that created the cluster is automatically bound to the default Kubernetes RBAC group "system:masters" (https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles), and in this way the creator of the cluster gets admin access to the cluster.
To verify which role or user created the EKS cluster, we can search for the CreateCluster API call in CloudTrail, and it will tell us the creator of the cluster.
Now, if we use a role to create the cluster, as you did (for example "crop-cluster"), we have to make sure that we are assuming this role before making any API calls using kubectl. The easiest way to do this is to set this role in the kubeconfig file, which we can easily do by running the command below from the terminal.
aws eks --region region-code update-kubeconfig --name cluster_name --role-arn crop-cluster-arn
Running the above command sets the role (the -r flag) in the kubeconfig file. In that way we are telling aws/aws-iam-authenticator that before making any API call it should first assume the role, so WE DON'T HAVE TO ASSUME THE ROLE MANUALLY via the CLI using "aws sts assume-role --role-arn crop-cluster-arn --role-session-name eks-access".
Once the kubeconfig file is set properly, make sure the CLI is configured with the IAM user credentials of "crop-portal". We can confirm this by running the "aws sts get-caller-identity" command; the output should show the user ARN in the "Arn" section, like below.
$ aws sts get-caller-identity
{
"Account": "xxxxxxxxxxxxx",
"UserId": "xxxxxxxxxxxxxx",
"Arn": "arn:aws:iam::xxxxxxxxxxx:user/crop-portal"
}
Once that is done, you should be able to run kubectl commands directly without any issue.
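For example, a quick end-to-end check could look like the sketch below (region, account ID and ARN are placeholders to adapt):
aws sts get-caller-identity        # "Arn" should be arn:aws:iam::<account-id>:user/crop-portal
aws eks --region us-east-1 update-kubeconfig --name crop-cluster --role-arn arn:aws:iam::<account-id>:role/crop-cluster
kubectl config view --minify -o jsonpath='{.users[0].user.exec.args}'   # the exec args should show the role being passed to the token command
kubectl cluster-info               # should now succeed instead of returning Unauthorized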
Note: I have assumed that the user "crop-portal" has enough permission to assume the role "crop-cluster".
Note: For more details, we can also refer to the answer on this question: Getting error "An error occurred (AccessDenied) when calling the AssumeRole operation: Access denied" after setting up EKS cluster

Terraform Kubernetes provider with EKS fails on configmap

I've followed the instructions to create an EKS cluster in AWS using Terraform.
https://www.terraform.io/docs/providers/aws/guides/eks-getting-started.html
I've also copied the output for connecting to the cluster to ~/.kube/config-eks. I've verified this successfully works, as I've been able to connect to the cluster and manually deploy containers. However, now I'm trying to use the Terraform Kubernetes provider to connect to the cluster but can't seem to configure the provider properly.
I've configured the provider to use my kubectl configuration, but when attempting to push a simple configmap, I get an error stating the following:
configmaps is forbidden: User "system:anonymous" cannot create configmaps in the namespace "kube-system"
I know that the provider is picking up part of the configuration, but I cannot seem to get it to authenticate. I suspect this is because EKS uses Heptio for authentication and I'm not sure if the K8s Go client used by Terraform supports Heptio. However, given that Terraform released their AWS EKS support when EKS went GA, I doubt that they wouldn't also update their Terraform provider to work with it.
Is it possible to even do this now? Are there alternatives?
Exec auth was added here: https://github.com/kubernetes/client-go/commit/19c591bac28a94ca793a2f18a0cf0f2e800fad04
This is what is utilized for custom authentication plugins and was published Feb 7th.
Right now, Terraform doesn't support the new exec-based authentication provider, but there is an issue open with a workaround: https://github.com/terraform-providers/terraform-provider-kubernetes/issues/161
That said, if I get some free time I will work on a PR.
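Until that support lands, one interim approach (a sketch, not necessarily the workaround described in that issue) is to fetch a token out-of-band and feed it to the provider's token argument through a variable; the variable name k8s_token and the cluster name are placeholders:
# requires aws-iam-authenticator and jq on the PATH
TOKEN=$(aws-iam-authenticator token -i my-eks-cluster | jq -r '.status.token')
terraform apply -var "k8s_token=${TOKEN}"   # with the kubernetes provider configured to use token = var.k8s_token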

How do I authenticate with Kubernetes kubectl using a username and password?

I've got a username and password, how do I authenticate kubectl with them?
Which command do I run?
I've read through https://kubernetes.io/docs/reference/access-authn-authz/authorization/ and https://kubernetes.io/docs/reference/access-authn-authz/authentication/, though I cannot find any relevant information in there for this case.
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/
The above does not seem to work:
kubectl get pods
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using Basic authentication, which hasn't been the default option for a number of releases.
The syntax you've listed appears right, assuming that the cluster supports basic authentication.
The error you're seeing is similar to the one here which may suggest that the cluster you're using doesn't currently support the authentication method you're using.
Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.
You should have a group set for the authenticating user.
Example:
password1,user1,userid1,system:masters
password2,user2,userid2
Reference:
"Use a credential with the system:masters group, which is bound to the cluster-admin super-user role by the default bindings."
https://kubernetes.io/docs/reference/access-authn-authz/rbac/
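Putting the two answers together, a minimal sketch, assuming the API server was started with --basic-auth-file and that file contains a matching line such as uXFGweU9l35qcif,admin,admin-uid,system:masters (cluster and context names below are placeholders):
kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
kubectl config set-context my-context --cluster=my-cluster --user=cluster-admin
kubectl config use-context my-context
kubectl get pods   # should no longer fail with "Unknown user"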

what does Unknown user "client" mean?

When I run a simple command on my local shell with the gcloud SDK:
$ kubectl get pod
I get such error:
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods at the cluster scope: Unknown user "client"
The same command runs fine on GCP cloud shell, and the output of
$ gcloud auth list
is as expected:
Credentialed Accounts
ACTIVE ACCOUNT
* foo#bar.com
I also tried to create a clusterrolebinding, but got a similar error.
This happens when you disable Legacy Authorisation in the cluster settings, because the client certificate that you are using is a legacy authentication method. So it looks like what is happening is the client authentication succeeds but the authorisation fails, as expected. ("Unknown user" in the error message, confusingly, seems to mean the user is unknown to the authorisation system, not to the authentication system.)
You can either disable the use of the client certificate with
gcloud config unset container/use_client_certificate
and then regenerate your kubectl config with
gcloud container clusters get-credentials my-cluster
Or you can simply re-enable Legacy Authorisation in the cluster settings in the Google Cloud Console, or using the command:
gcloud container clusters update [CLUSTER_NAME] --enable-legacy-authorization
I understand this issue has now been resolved, but I would like to add some information about why this issue can occur, as it may be useful to anyone who comes across a similar issue.
Kubernetes Engine users can authenticate to the Kubernetes API using Google OAuth2 access tokens, which means that when users create a new cluster, Kubernetes Engine configures kubectl to authenticate the user to the cluster.
It's also possible to authenticate to the cluster using legacy methods which include using the cluster certificate and/or username and passwords. This is defined in the gcloud config.
The configuration of gcloud in, for example the Cloud Shell may be different from an installation of gcloud elsewhere, for example on a home workstation.
The error:
Error from server (Forbidden): pods is forbidden: User "client" cannot
list pods at the cluster scope: Unknown user "client"
suggests that container/use_client_certificate is set to True in the gcloud config, i.e. that gcloud is expecting a client cluster certificate to authenticate to the cluster (this is what the 'client' in the error message refers to).
As @Yanwei has discovered, unsetting container/use_client_certificate in the gcloud config by issuing the following command removes the need for a legacy certificate or credentials and prevents the error message:
gcloud config unset container/use_client_certificate
Issues such as this may be more likely if you are using an older version of gcloud on your home workstation or elsewhere.
There is some information on this here.
Found out there is some issue with gcloud config. This command solved it:
gcloud config unset container/use_client_certificate
In addition to running
gcloud config unset container/use_client_certificate
also make sure you do not have this env variable set to True:
CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE
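A quick way to check both the gcloud setting and the environment variable, then regenerate the kubeconfig (a sketch; cluster name and zone are placeholders):
gcloud config get-value container/use_client_certificate   # should be unset or False
echo "${CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE:-not set}"
unset CLOUDSDK_CONTAINER_USE_CLIENT_CERTIFICATE
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get pods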

User "system:anonymous" cannot get path "/"

I just set up a Kubernetes cluster based on this link https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#multi-platform
I checked with kubectl get nodes, and the master node is Ready, but when I access the link https://k8s-master-ip:6443/
it shows the error: User "system:anonymous" cannot get path "/".
What is the trick I am missing?
Hope you see something like this:
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
This is good, as not everyone should be able to access the cluster. If you want to see the services, run "kubectl proxy"; it authenticates with your kubeconfig credentials and gives you access to the API and services from your local machine.
C:\dev1> kubectl proxy
Starting to serve on 127.0.0.1:8001
And when you hit 127.0.0.1:8001 you should see the list of available API paths.
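For example, with the proxy running in another terminal, requests go through without any extra auth headers (the namespace and paths are just common examples):
curl http://127.0.0.1:8001/                                    # lists the available API paths
curl http://127.0.0.1:8001/api/v1/namespaces/default/services  # lists services in the default namespace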
The latest kubernetes deployment tools enable RBAC on the cluster. Jenkins is relegated to the catch-all user system:anonymous when it accesses https://192.168.70.94:6443/api/v1/.... This user has almost no privileges on kube-apiserver.
The bottom-line is, Jenkins needs to authenticate with kube-apiserver - either with a bearer token or a client cert that's signed by the k8s cluster's CA key.
Method 1. This is preferred if Jenkins is hosted in the k8s cluster:
Create a ServiceAccount in k8s for the plugin
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the ServiceAccount
Configure the plugin to use the ServiceAccount's token when accessing the URL https://192.168.70.94:6443/api/v1/...
Method 2. If Jenkins is hosted outside the k8s cluster, the steps above can still be used. The alternative is to:
Create a client cert that's tied to the k8s cluster's CA. You have to find where the CA key is kept and use it to generate a client cert.
Create an RBAC profile (i.e. Role/RoleBinding or ClusterRole/ClusterRoleBinding) that's tied to the client cert
Configure the plugin to use the client cert when accessing the URL https://192.168.70.94:6443/api/v1/...
Both methods work in any situation. I believe Method 1 will be simpler for you because you don't have to mess around with the CA key; a minimal sketch of Method 1 is shown below.
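A minimal shell sketch of Method 1, assuming the plugin only needs a bearer token; the names jenkins and jenkins-view, the namespace, and the view ClusterRole are placeholders to adapt:
kubectl create serviceaccount jenkins -n default
kubectl create clusterrolebinding jenkins-view --clusterrole=view --serviceaccount=default:jenkins
# on clusters that still create token Secrets for ServiceAccounts automatically:
SECRET=$(kubectl get sa jenkins -n default -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -n default -o jsonpath='{.data.token}' | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" https://192.168.70.94:6443/api/v1/namespaces/default/pods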
By default, the user system:anonymous has no bindings that grant it access, which is why unauthenticated requests to the cluster are blocked.
Executing the following command binds the cluster-admin ClusterRole to the user system:anonymous, which will give you the required access.
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous