Generated serviceaccount token is rejected by kube-apiserver

I have one cluster working successfully without any problems, and I've tried to make a copy of it. The copy basically works, except for one issue: the token generated by the apiserver is not valid, with the error message:
6 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error
I have the API server started with the following parameters:
kube-apiserver --address=0.0.0.0 --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service-cluster-ip-range=10.116.0.0/23 --client_ca_file=/srv/kubernetes/ca.crt --basic_auth_file=/srv/kubernetes/basic_auth.csv --authorization-mode=AlwaysAllow --tls_cert_file=/srv/kubernetes/server.cert --tls_private_key_file=/srv/kubernetes/server.key --secure_port=6443 --token_auth_file=/srv/kubernetes/known_tokens.csv --v=2 --cors_allowed_origins=.* --etcd-config=/etc/kubernetes/etcd.config --allow_privileged=False
I think I'm missing something but can't figure out what exactly; any help will be appreciated!

So, apparently the controller manager was using the wrong server.key.
According to the Kubernetes documentation, the token is generated by the controller manager.
While I was copying all of my configuration I had to change the IP address, and because of that the certificate as well. But the controller-manager was still started with the "old" certificate, so after the change it signed tokens with a server.key that no longer matched the one the apiserver uses for verification.
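In other words, service account tokens are signed by the controller manager with its --service-account-private-key-file, and the API server verifies them with --service-account-key-file (if that flag is not set, the API server falls back to its --tls-private-key-file). A minimal sketch of keeping the two in sync, assuming the /srv/kubernetes paths from the question:
# the controller manager signs service account tokens with this key (other flags omitted)
kube-controller-manager --service-account-private-key-file=/srv/kubernetes/server.key
# the API server must verify tokens with the matching key
kube-apiserver --service-account-key-file=/srv/kubernetes/server.key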

Below are the flags I use for the API server; they work for me, so check them against yours.
--insecure-bind-address=${OS_PRIVATE_IPV4}
--bind-address=${OS_PRIVATE_IPV4}
--tls-cert-file=/srv/kubernetes/server.cert
--tls-private-key-file=/srv/kubernetes/server.key
--client-ca-file=/srv/kubernetes/ca.crt
--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
--token-auth-file=/srv/kubernetes/known_tokens.csv
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--etcd_servers=http://${OS_PRIVATE_IPV4}:4001
--service-cluster-ip-range=10.10.0.0/16
--logtostderr=true
--v=5
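Whichever flag layout you use, note that tokens already minted with the old key stay invalid even after the keys agree. One way to clear them is to delete the stale service account token secrets so the controller manager re-creates them with the current key (a sketch; the secret name below is hypothetical):
# list the existing service account token secrets
kubectl get secrets --all-namespaces | grep service-account-token
# delete a stale one; the controller manager recreates it signed with the current key
kubectl delete secret default-token-abc12 --namespace=default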

Related

What does "certificate is valid for 10.96.0.1, 10.198.74.71, not 127.0.0.1" mean?

What does this error mean? I have Argo workflows working on my development computer, but when I deploy it, this is what I see. Where do I need to start reading to fix it?
ERROR
Post https://127.0.0.1:6443/apis/argoproj.io/v1alpha1/namespaces/argo/workflows: x509: certificate is valid for 10.96.0.1, 10.198.74.71, not 127.0.0.1
For anyone who comes across a strange error like this, it is (again) an RBAC problem. To fix this error, I updated my kubeconfig to reflect the current clusters and roles.
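If the kubeconfig points the client at 127.0.0.1 while the server certificate only covers the cluster addresses (as the error above shows), updating the cluster entry to an address that is actually in the certificate also clears the x509 error. A quick check, as a sketch (the cluster name and port are assumptions):
# show which endpoint the current context talks to
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# point the cluster entry at an address covered by the certificate (cluster name is hypothetical)
kubectl config set-cluster my-cluster --server=https://10.198.74.71:6443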

Creating Kubernetes Endpoint in VSTS generates error

When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message
"The Kubconfig does not contain user field. Please check the kubeconfig." is always displayed.
I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and they all end up with the same issue. Has anyone been successful in creating a new endpoint?
This is also tracked as TsuyoshiUshio/KubernetesTask issue 35.
I tried to reproduce this, but I couldn't.
I'm not sure, but my guess is that it might be a mismatch between the cluster version, the kubectl version fetched by the download task, and the kubeconfig.
A workaround might look like this:
Run kubectl version on your local machine and check the current server/client versions.
Specify the same version as the server in the download task (by default it is 1.5.2).
Look at the log of your failing release pipeline to see which kubectl command was executed, then run the same command on your local machine, adapted to your local environment.
The point is: before going to VSTS, download kubectl yourself.
Then put the kubeconfig in the default location, ~/.kube/config, or set the KUBECONFIG environment variable to its path.
Then execute kubectl get nodes and make sure it works (a minimal sketch follows below).
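A minimal local check, assuming kubectl is already on the PATH and the config file came from the cluster owner:
# compare client and server versions
kubectl version
# use the default location, or point KUBECONFIG at the file explicitly
export KUBECONFIG=$HOME/.kube/config
# confirm the config actually authenticates against the cluster
kubectl get nodes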
My kubeconfig has a different format from yours. If you use AKS, use the az aks install-cli and az aks get-credentials commands.
Please refer to https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough .
If it works locally, the config file should also work in the VSTS task environment (otherwise this task or VSTS has a bug).
I had the same problem on VSTS.
Here is my workaround to get a Service Connection working (in my case to GCloud):
Switch Authentication to "Service Account".
Run the two commands mentioned by the info icon next to the Token and Certificate fields: "Token to authenticate against Kubernetes. Use the ‘kubectl get serviceaccounts -o yaml’ and ‘kubectl get secret -o yaml’ commands to get the token."
kubectl get secret -o yaml > kubectl-secret.yaml
Search inside the file kubectl-secret.yaml for the ca.crt and token values.
Enter those values into the required fields in VSTS.
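For reference, the values in a service account secret are stored base64 encoded, so they can be pulled out and decoded roughly like this (the secret name is hypothetical, and whether the task wants the decoded value or the raw base64 string may depend on the task version):
# decode the service account token (secret name is hypothetical)
kubectl get secret my-sa-token-12345 -o jsonpath='{.data.token}' | base64 --decode
# decode the cluster CA certificate from the same secret
kubectl get secret my-sa-token-12345 -o jsonpath='{.data.ca\.crt}' | base64 --decode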
The generated config I was using had a duplicate line; removing it corrected the issue for me.
users:
- name: cluster_stuff_here
- name: cluster_stuff_here
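For comparison, a corrected users section keeps a single entry per name, roughly like this (the token field is a placeholder; the exact user fields depend on how you authenticate):
users:
- name: cluster_stuff_here
  user:
    token: <service-account-token>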

Mulesoft - Uh-oh spaghettios! There's nothing here

This error is driving me nuts...
Situation:
I am trying to create a REST API and use an API gateway proxy to access it. The proxy URL is HTTPS.
The deployment goes through fine. No errors reported in the logs. Worker assigned.
However, when I try to access it through the browser, I get "Uh-oh spaghettios! There's nothing here.".
I have tried all the usual things, like making the HTTPS port dynamic using ${https.port} and using 0.0.0.0 instead of localhost in the http-listener config, but that does not help. Does this have something to do with the proxy version?
Any help or pointers will be great!
Make sure you follow Step 2 from the link below:
Getting Started with Connectors
All,
Got the resolution. The problem was with the certificate chain: the keystore did not contain the intermediate certificates. Once they were added to the keystore, the connectivity worked fine.
If only Mulesoft provided correct errors or detailed logging, I would have saved a lot of time on this.
Thanks for your input.
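For anyone hitting the same thing: the missing intermediate certificates can be imported into the listener keystore with keytool, roughly like this (the alias, file name and keystore name are assumptions):
# import the intermediate CA into the keystore used by the HTTPS listener (names are hypothetical)
keytool -importcert -trustcacerts -alias intermediate-ca -file intermediate.crt -keystore listener-keystore.jks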

Kube-apiserver complains about remote error bad certificate

I reinstalled some nodes and a master. Now on the master I am getting:
Sep 15 04:53:58 master kube-apiserver[803]: I0915 04:53:58.413581 803 logs.go:41] http: TLS handshake error from $ip:54337: remote error: bad certificate
Where $ip is one of the nodes.
So I likely need to delete or recreate certificates. What would the location of those be? Any recommended commands to recreate or remove those or copy them from node to master or vice versa? Whatever gets me past this error message...
Take a look through the Creating Certificates section of authentication.md. It walks you through the certificates that you need to create and how to pass them to the system components, and you should be able to use that to re-generate certificates for your cluster.
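If you'd rather do it by hand, a bare-bones regeneration with openssl looks roughly like this (file names and CNs are assumptions, and the walkthrough in authentication.md covers the extra options such as SANs):
# new cluster CA (file names are hypothetical)
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes-ca" -days 365 -out ca.crt
# new API server key and certificate signed by that CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out server.cert
# the nodes' client certificates must be issued from the same CA, otherwise the handshake keeps failing
openssl req -new -newkey rsa:2048 -nodes -keyout node.key -subj "/CN=node1" -out node.csr
openssl x509 -req -in node.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out node.crt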

Not able to connect to cluster. Facing Certificate signed by unknown authority

I am not sure whether what I am trying to do is possible, or whether I am going about it the right way.
One of my colleagues spun up a Kubernetes GCE cluster (with 1 master and 4 minions) in a project that is shared with me with owner access.
After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt and .kubecfg.key. I copied all of them to my home folder and set up the Kubernetes workspace.
I also set the project name as the default project in gcloud config, and now I can connect to the master and the minions using 'gcutil ssh --zone us-central1-b kubernetes-master'.
But when I try to list the existing pods using 'cluster/kubecfg.sh list pods',
I see
"F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe")
I tried to debug this from my side but failed to come to any conclusion. Any sort of clue will be helpful.
You can also copy the cert files off of the master again. They are located in /usr/share/nginx on the master.
It is probably due to a not-yet-implemented feature; see this issue:
https://github.com/GoogleCloudPlatform/kubernetes/issues/1886
You can copy the files from /usr/share/nginx/... on the master into your home directory and try again.
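As a sketch of that copy step (the file names under /usr/share/nginx are an assumption based on the local names mentioned in the question, and plain scp assumes you can SSH to the master directly):
# pull the cert material from the master into the local home directory (file names are hypothetical)
scp kubernetes-master:/usr/share/nginx/kubecfg.crt ~/.kubecfg.crt
scp kubernetes-master:/usr/share/nginx/kubecfg.key ~/.kubecfg.key
scp kubernetes-master:/usr/share/nginx/ca.crt ~/.kubecfg.ca.crt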
I figured out a workaround: set the -insecure_skip_tls_verify option
In kubecfg.sh, change the code near the bottom to
else
  auth_config=(
    "-insecure_skip_tls_verify"
  )
fi
Obviously this is insecure, and you are putting yourself at risk of a man-in-the-middle attack, etc.