How to configure kubectl in a Kubernetes cluster

I have provisioned a Kubernetes cluster using this SaltStack repo:
https://github.com/valentin2105/Kubernetes-Saltstack
Now, I am not able to configure my kubectl CLI to access the cluster.
Is there a way to reset the credentials?
Is there a way to properly configure .kube/config with the right context, user, credentials, and cluster name by retrieving the info from the servers?
I am new to kubernetes, so maybe I am missing something here.

To set up your cluster, you can do as follows:
kubectl config set-cluster k8s-cluster --server=${CLUSTER} [--insecure-skip-tls-verify=true]
--server=${CLUSTER}, where ${CLUSTER} is your cluster address
--insecure-skip-tls-verify=true skips TLS certificate verification; use it only when the API server presents a self-signed or otherwise untrusted certificate, and avoid it in production
Then you need to set your context (depending on your Kubernetes configuration):
kubectl config set-context k8s-context --cluster=k8s-cluster --namespace=${NS}
--namespace=${NS} specifies the default namespace (which lets you skip the -n flag when typing kubectl commands against that namespace)
If you are using RBAC, you might need to specify your user and pass your connection token or your login password.
For this advanced usage, see the docs: https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration
Now, finally, to use your context you only have to run:
kubectl config use-context k8s-context
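Putting those steps together, here is a minimal sketch; the server address, token, and namespace are placeholders you would replace with your own values:

kubectl config set-cluster k8s-cluster --server=https://203.0.113.10:6443 --insecure-skip-tls-verify=true
kubectl config set-credentials k8s-user --token="<your-token>"   # only needed with RBAC
kubectl config set-context k8s-context --cluster=k8s-cluster --user=k8s-user --namespace=default
kubectl config use-context k8s-context
kubectl get nodes   # quick check that the connection works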

Related

What does "kubectl config set-cluster" mean, and what does it actually do?

While using the command kubectl config set-cluster test --server=https://127.0.0.1:52807 for RBAC, the IP here is from the kind cluster that I am running. After that I use kubectl config set-context test --cluster=test, followed by the required credentials, and I switch to the context with kubectl config use-context test, and I am in the test context. I understand that the first command configures the config file, but am I making a cluster within a cluster? Please help me clear up my doubt about what it actually does.
kubectl config set-cluster sets a cluster entry in your kubeconfig file (usually found in $HOME/.kube/config). The kubeconfig file defines how your kubectl is configured.
The cluster entry defines where kubectl can find the kubernetes cluster to talk to. You can have multiple clusters defined in your kubeconfig file.
kubectl config set-context sets a context element, which is used to combine a cluster, namespace and user into a single element so that kubectl has everything it needs to communicate with the cluster. You can have multiple contexts, for example one per kubernetes cluster that you're managing.
kubectl config use-context sets your current context to be used in kubectl.
So to walk through your commands:
kubectl config set-cluster test --server=https://127.0.0.1:52807 creates a new entry in kubeconfig under the clusters section with a cluster called test pointing towards https://127.0.0.1:52807
kubectl config set-context test --cluster=test creates a new context in kubeconfig called test and tells that context to point to a cluster called test
kubectl config use-context test changes the current context in kubeconfig to a context called test (which you just created).
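After those three commands, the relevant parts of your kubeconfig would look roughly like this (a sketch; there is no users entry because the commands above don't create one):

apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://127.0.0.1:52807
  name: test
contexts:
- context:
    cluster: test
  name: test
current-context: test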
More docs on kubectl config and kubeconfig:
https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
https://kubernetes.io/docs/reference/kubectl/cheatsheet/#kubectl-context-and-configuration

Using GKE service account credentials with kubectl

I am trying to invoke kubectl from within my CI system. I wish to use a google cloud service account for authentication. I have a secret management system in place that injects secrets into my CI system.
However, my CI system does not have gcloud installed, and I do not wish to install that. It only contains kubectl. Is there any way that I can use a credentials.json file containing a gcloud service account (not a kubernetes service account) directly with kubectl?
The easiest way to skip the gcloud CLI is probably to use the --token option. You can get a token with RBAC by creating a service account and tying it to a ClusterRole or Role with either a ClusterRoleBinding or RoleBinding, for example as sketched below.
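A minimal sketch of that setup on an older cluster where service account token secrets are still created automatically (the ci-bot name and the built-in view ClusterRole are illustrative choices, not requirements):

# Create a service account and bind it to the read-only "view" ClusterRole
kubectl create serviceaccount ci-bot --namespace default
kubectl create clusterrolebinding ci-bot-view --clusterrole=view --serviceaccount=default:ci-bot
# Read the token out of the service account's auto-created secret
kubectl get secret "$(kubectl get serviceaccount ci-bot -o jsonpath='{.secrets[0].name}')" \
  -o jsonpath='{.data.token}' | base64 --decode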
Then from the command line:
$ kubectl --token <token-from-your-service-account> get pods
You will still need a context in your ~/.kube/config:
- context:
    cluster: kubernetes
  name: kubernetes-token
Otherwise, you will have to use:
$ kubectl --insecure-skip-tls-verify --token <token-from-your-service-account> -s https://<address-of-your-kube-api-server>:6443 get pods
Note that if you don't want the token to show up on the logs you can do something like this:
$ kubectl --token $(cat /path/to/a/file/where/the/token/is/stored) get pods
Also, note that this doesn't prevent someone from running ps -Af on your box and grabbing the token from there for the lifetime of the kubectl process (it's a good idea to rotate the tokens).
Edit:
Rather than passing the token through $(cat /path/to/a/file/where/the/token/is/stored) on every invocation, you can store it in your kubeconfig once with kubectl config set-credentials and reference that user from your context, as sketched below.
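A hedged sketch of that approach (the ci-deployer user name is a placeholder; the cluster and context names match the kubeconfig snippet above):

kubectl config set-credentials ci-deployer --token="$(cat /path/to/a/file/where/the/token/is/stored)"
kubectl config set-context kubernetes-token --cluster=kubernetes --user=ci-deployer
kubectl --context=kubernetes-token get pods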

Login to GKE via service account with token

I am trying to access my Kubernetes cluster on google cloud with the service account, but I am not able to make this works. I have a running system with some pods and ingress. I want to be able to update images of deployments.
I would like to use something like this (remotely):
kubectl config set-cluster cluster --server="<IP>" --insecure-skip-tls-verify=true
kubectl config set-credentials foo --token="<TOKEN>"
kubectl config set-context my-context --cluster=cluster --user=foo --namespace=default
kubectl config use-context my-context
kubectl set image deployment/my-deployment boo=eu.gcr.io/project-123456/image:v1
So I created the service account and then get the secret token:
kubectl create serviceaccount foo
kubectl get secret foo-token-gqvgn -o yaml
But, when I try to update the image in any deployment, I receive:
error: You must be logged in to the server (Unauthorized)
For the API IP address, I use the address shown in the GKE console as the cluster endpoint.
Any suggestions? Thanks.
I have tried to recreate your problem.
Steps I have followed
kubectl create serviceaccount foo
kubectl get secret foo-token-* -o yaml
Then, I have tried to do what you have done
As the token, I used the base64-decoded value of the token field from that secret.
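For reference, the lookup and decoding can be done in one command; the secret name here is the one from the question and will differ on your cluster:

kubectl get secret foo-token-gqvgn -o jsonpath='{.data.token}' | base64 --decode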
Then I tried this:
$ kubectl get pods
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:foo" cannot list pods in the namespace "default": Unknown user "system:serviceaccount:default:foo"
This gave me an error, as expected, because I still need to grant permissions to this ServiceAccount.
How can I grant permissions to this ServiceAccount? I need to create a ClusterRole and a ClusterRoleBinding with the necessary permissions, as sketched below.
Read the role-based access control docs to learn more.
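A minimal sketch of such a ClusterRole and ClusterRoleBinding, assuming the account only needs to read pods (the pod-reader and foo-pod-reader names are illustrative):

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: foo-pod-reader
subjects:
- kind: ServiceAccount
  name: foo
  namespace: default
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

Apply it with kubectl apply -f and retry kubectl get pods with the token.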
Alternatively, I can do another thing:
$ kubectl config set-credentials foo --username="admin" --password="$PASSWORD"
This will grant you admin authorization, but you need to provide the cluster's basic-auth credentials:
Username: admin
Password: -----
You will find this info in GKE -> Kubernetes Engine -> {cluster} -> Show credentials

How to deploy an application in GKE from a public CI server

I'm trying to deploy an application in a GKE 1.6.2 cluster running ContainerOS but the instructions on the website / k8s are not accurate anymore.
The error that I'm getting is:
Error from server (Forbidden): User "circleci@gophers-slack-bot.iam.gserviceaccount.com"
cannot get deployments.extensions in the namespace "gopher-slack-bot".:
"No policy matched.\nRequired \"container.deployments.get\" permission."
(get deployments.extensions gopher-slack-bot)
The repository for the application is available here.
Thank you.
I ran into a few breaking changes in the past when using the gcloud tool to authenticate kubectl to a cluster, so I ended up figuring out how to authenticate kubectl to a specific namespace independently of GKE. Here's what works for me:
On CircleCI:
setup_kubectl() {
  # Write the CA cert (injected as a base64 env var) to a file kubectl can use
  echo "$KUBE_CA_PEM" | base64 --decode > kube_ca.pem
  # Register the cluster, the token-based user, and a context that ties them together
  kubectl config set-cluster default-cluster --server="$KUBE_URL" --certificate-authority="$(pwd)/kube_ca.pem"
  kubectl config set-credentials default-admin --token="$KUBE_TOKEN"
  kubectl config set-context default-system --cluster=default-cluster --user=default-admin --namespace default
  kubectl config use-context default-system
}
And here's how I get each of those env vars from kubectl.
kubectl get serviceaccount default --namespace "$namespace" -o json
The service account will contain the name of its secret. In my case, with the default namespace, it's
"secrets": [
{
"name": "default-token-655ls"
}
]
Using the name, I get the contents of the secret
kubectl get secrets $secret_name -o json
The secret will contain ca.crt and token fields, which map to $KUBE_CA_PEM and $KUBE_TOKEN in the shell script above. Note that both fields are stored base64-encoded in the secret: the script decodes $KUBE_CA_PEM itself, while $KUBE_TOKEN must be set to the already-decoded token value.
Finally, use kubectl cluster-info to get the $KUBE_URL value.
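Pulled together, one way to extract all three values with jsonpath instead of reading the JSON by hand (a sketch that assumes the default service account in the current namespace):

# The CA stays base64-encoded because setup_kubectl decodes it; the token must be decoded here
SECRET_NAME=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
KUBE_CA_PEM=$(kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.ca\.crt}')
KUBE_TOKEN=$(kubectl get secret "$SECRET_NAME" -o jsonpath='{.data.token}' | base64 --decode)
KUBE_URL=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')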
Once you run setup_kubectl on CI, your kubectl utility will be authenticated to the namespace you're deploying to.
In Kubernetes 1.6 and GKE, role-based access control was introduced. The authors of your tool need to give the service account the ability to get deployments (along with probably quite a few other permissions) as part of its account creation.
https://kubernetes.io/docs/admin/authorization/rbac/
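For instance, a hedged sketch of a RoleBinding that grants that service account the built-in edit ClusterRole in the deployment namespace (you may prefer a narrower custom role):

kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: circleci-deployer
  namespace: gopher-slack-bot
subjects:
- kind: User
  name: circleci@gophers-slack-bot.iam.gserviceaccount.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io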

Is there a way to specify the Google Cloud Platform project for kubectl in the command?

Is there something like:
kubectl get pods --project=PROJECT_ID
I would like not to modify my default gcloud configuration to switch between my staging and production environment.
kubectl saves clusters/contexts in its configuration. If you use the default scripts to bring up the cluster, these entries should have been set for your cluster.
A brief overview of kubectl config:
kubectl config view lets you view the clusters/contexts in your configuration.
kubectl config set-cluster and kubectl config set-context modifies/adds new entries.
You can use kubectl config use-context to change the default context, and kubectl --context=CONTEXT get pods to switch to a different context for the current command.
You can download credentials for each of your clusters using gcloud container clusters get-credentials which takes the --project flag. Once the credentials are cached locally, you can use the --context flag (as Yu-Ju explains in her answer) to switch between clusters for each command.
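For example (a sketch; the project, zone, and cluster names are placeholders, and the context names follow the gke_<project>_<zone>_<cluster> convention that get-credentials uses):

# Cache credentials for each cluster once, naming the project explicitly
gcloud container clusters get-credentials staging-cluster --project=my-staging-project --zone=us-central1-a
gcloud container clusters get-credentials prod-cluster --project=my-prod-project --zone=us-central1-a

# Then pick the cluster per command, leaving the default gcloud config untouched
kubectl --context=gke_my-staging-project_us-central1-a_staging-cluster get pods
kubectl --context=gke_my-prod-project_us-central1-a_prod-cluster get pods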