Not Able To Create Pod in Kubernetes

I followed the official Kubernetes installation guide to install Kubernetes on Fedora 22 servers. Everything worked during the installation.
After the installation, I could see all my nodes up and running and connected to the master. However, creating a simple pod kept failing when I followed the 101 guide.
$ kubectl create -f pod-nginx.yaml
Error from server: error when creating "pod-nginx.yaml": Pod "nginx" is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
Do I need to create an API token? If yes, how?
I googled the issue but found nothing helpful. It looks like I am the only one on the planet hitting this issue.
Does anyone have any ideas?

The ServiceAccount admission controller prevents pods from being created until the service account they use in their namespace is initialized.
If the controller-manager is started with the appropriate arguments, it will automatically populate namespaces with a default service account, and auto-create the API token for that service account.
It looks like that guide needs to be updated with the information from this comment:
https://github.com/GoogleCloudPlatform/kubernetes/issues/11355#issuecomment-127378691

# Generate a key for signing service account tokens
openssl genrsa -out /tmp/serviceaccount.key 2048

# In /etc/kubernetes/apiserver, point the API server at the key:
KUBE_API_ARGS="--service_account_key_file=/tmp/serviceaccount.key"

# In /etc/kubernetes/controller-manager, give the controller-manager the same key:
KUBE_CONTROLLER_MANAGER_ARGS="--service_account_private_key_file=/tmp/serviceaccount.key"

# Restart both services so the new arguments take effect
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
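After the restart, you can verify that the token was created (a quick check; the secret name suffix will differ on your cluster):
kubectl get serviceaccounts default -o yaml
kubectl get secrets
# a default-token-xxxxx secret should now exist and be referenced by the service account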

Related

Permission denied using kubectl but able to run helm

I am facing permission denied errors when using kubectl for all commands, be it get pods or apply, but I am able to use helm and log in with k9s to perform destructive actions. I am using the same context for all of these actions.
kubectl get nodes
# error: You must be logged in to the server (Unauthorized)
kubectl apply -f some-manifest.yaml
# error: You must be logged in to the server (the server has asked for the client to provide credentials)
Does anyone have a hint as to why this is happening or what to look further into? I am using a managed k8s on Vultr, a smaller cloud provider.
I don't know what specifically the issue was, but I rebuilt my .kube/config file slowly with all my contexts and it ended up working again.
Very strange that helm worked and kubectl didn't, though...
I am pretty sure this is a "kubernetes context" problem.
Check the solution here: helm and kubectl context mismatch
Solution for k9s can be found here: https://k9scli.io/topics/commands/
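If you want to rule out a context mismatch yourself, compare what kubectl and helm are actually using (a quick sketch; <your-context> stands for whatever name appears in your kubeconfig):
kubectl config current-context
kubectl config get-contexts
echo $KUBECONFIG                        # an exported KUBECONFIG can make tools diverge
helm ls --kube-context <your-context>   # pin helm to the same context explicitly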

Problems when upgrading kube-lego to cert-manager

I use kube-lego version 0.1.5-a9592932, but this service is deprecated and the time has come to migrate to cert-manager. While testing the migration I lost the secret "kube-lego-account", but I need it! Is it possible to force the secret to be regenerated? I restarted kube-lego, checked the logs, and found this:
Attempting to create new secret" context=secret name=kube-lego-account namespace=default
But the secret was not created. How can I solve this issue?
Let's take another path. Let's Encrypt's official docs say they will no longer support versions below 0.8, so I recommend installing cert-manager provided by Jetstack; you can find the Helm chart for it here.
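A minimal install sketch (assuming Helm 3 and the Jetstack chart repository; the release name and namespace are illustrative):
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true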
Then follow this stackoverflow post for the configuration. Note that if the API version mentioned in that post isn't supported for the ClusterIssuer, use this instead:
apiVersion: cert-manager.io/v1alpha2
Note that the TLS secret name mentioned in the certificate will be auto-generated by cert-manager, and it automatically starts an ACME challenge to validate the domain once you patch that secret name into the TLS section of your ingress rule.
That should solve the issue, and the certificate's status will change to Ready after the domain verification.
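For reference, a minimal ClusterIssuer sketch under that API version (the name, email, and ingress class are placeholders you would replace):
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # placeholder
    privateKeySecretRef:
      name: letsencrypt-prod-account  # cert-manager stores the ACME account key here
    solvers:
    - http01:
        ingress:
          class: nginx                # placeholder, match your ingress controller
EOF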

GOCD agent registration with kubernetes

I want to register kubernetes-elastic-agents with gocd-server. According to the doc https://github.com/gocd/kubernetes-elastic-agents/blob/master/install.md
I need a Kubernetes security token and the cluster CA certificate. My Kubernetes cluster is running. How do I create a security token? Where can I find the cluster CA cert?
There are two answers:
The first is that it's very weird that one would need to manually input those things, since they live at a well-known location on disk in any Pod (one that isn't excluded via the automountServiceAccountToken field), as described in Accessing the API from a Pod.
The second is that if you really do need a statically provisioned token belonging to a ServiceAccount, then you can either retrieve an existing token from the Secret that is created by default for every ServiceAccount, or create a second Secret as described in Manually create a service account API token
The CA cert you asked about is present in every Pod in the cluster at the location mentioned in the first link, as well as in the ~/.kube/config of anyone who wishes to access the cluster; kubectl config view --raw -o yaml will show it to you.
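Putting both together, here is a sketch of pulling an existing token and the CA cert out of the default ServiceAccount's secret (this assumes a cluster old enough to auto-create token secrets; secret names will differ):
SECRET=$(kubectl get serviceaccount default -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode
kubectl get secret "$SECRET" -o jsonpath='{.data.ca\.crt}' | base64 --decode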

Get kubeconfig by ssh into cluster

If I am able to SSH into the master or any nodes in the cluster, is it possible for me to get 1) the kubeconfig file or 2) all information necessary to compose my own kubeconfig file?
You can find the configuration on the master node at /etc/kubernetes/admin.conf (on v1.8+).
On some versions of kubernetes, this can be found under ~/.kube
I'd be interested in hearing the answer to this as well. But I think it depends on how the authentication is set up. For example,
Minikube uses "client certificate" authentication. If it stores the client.key on the cluster as well, you might construct a kubeconfig file by combining it with the cluster’s CA public key.
GKE (Google Kubernetes Engine) uses authentication on a frontend that's separate from the Kubernetes cluster (masters are hosted separately). You can't ssh into the master, but if it was possible, you still might not be able to construct a token that works against the API server.
However, by default Pods have a service account token that can be used to authenticate to Kubernetes API. So if you SSH into a node and run docker exec into a container managed by Kubernetes, you will see this:
/ # ls run/secrets/kubernetes.io/serviceaccount
ca.crt namespace token
You can combine ca.crt and token to construct a kubeconfig file that will authenticate to the Kubernetes master.
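A minimal sketch of that composition from inside the Pod (the API server address and port are assumptions; substitute your master's):
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
kubectl config set-cluster mycluster \
  --server=https://<master-ip>:6443 \
  --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  --embed-certs=true
kubectl config set-credentials myuser --token="$TOKEN"
kubectl config set-context mycontext --cluster=mycluster --user=myuser
kubectl config use-context mycontext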
So the answer to your question is yes: if you SSH into a node, you can then jump into a Pod and collect the information to compose your own kubeconfig file. (See this question on how to disable this. I think it can also be disabled by default by enforcing RBAC and disabling ABAC, but I might be wrong.)

Kubernetes 1.6+ RBAC: Gain access as role cluster-admin via kubectl

1.6+ sees a lot of changes revolving around RBAC and ABAC. However, what is a little quirky is not being able to access the dashboard etc. by default, as was previously possible.
Access will result in
User "system:anonymous" cannot proxy services in the namespace "kube-system".: "No policy matched."
There is plenty of documentation in the k8s docs, but it doesn't really state how, as the creator of a cluster, to practically gain access as cluster-admin.
What is a practical way to authenticate me as cluster-admin?
By far the easiest method is to use the credentials from /etc/kubernetes/admin.conf (this is on your master if you used kubeadm). Run kubectl proxy --kubeconfig=admin.conf on your client, and then you can visit http://127.0.0.1:8001/ui from your browser.
You might need to change the master address in admin.conf after you copy it to your client machine.
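For example (a sketch assuming kubeadm defaults and SSH access to the master; substitute your master's address):
scp root@<master-ip>:/etc/kubernetes/admin.conf .
kubectl proxy --kubeconfig=admin.conf
# then browse to http://127.0.0.1:8001/ui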