What does "certificate is valid for 10.96.0.1, 10.198.74.71, not 127.0.0.1" mean? - certificate

What does this error mean? I have Argo workflows working on my development computer, but when I deploy it, this is what I see. Where do I need to start reading to fix it?
ERROR
Post https://127.0.0.1:6443/apis/argoproj.io/v1alpha1/namespaces/argo/workflows: x509: certificate is valid for 10.96.0.1, 10.198.74.71, not 127.0.0.1

For anyone who comes across a strange error like this: the message itself is a TLS mismatch rather than an RBAC problem. The API server's certificate was issued for 10.96.0.1 and 10.198.74.71, while my client was connecting to 127.0.0.1, an address the certificate does not cover. To fix this error, I updated my kubeconfig to reflect the current clusters and roles.
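To check this before editing anything, a minimal sketch (the cluster name below is a placeholder; the address is the one from the error message):

# See which server address the current kubeconfig context talks to
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Inspect which names/IPs the API server certificate actually covers
openssl s_client -connect 10.198.74.71:6443 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

# Point the kubeconfig cluster entry at an address the certificate covers
kubectl config set-cluster my-cluster --server=https://10.198.74.71:6443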

Related

Mulesoft - Uh-oh spaghettios! There's nothing here

This error is driving me nuts...
Situation:
I am trying to create a REST API and use an API gateway proxy to access it. The proxy URL is HTTPS.
The deployment goes through fine. No errors are reported in the logs. A worker is assigned.
However, when I try to access it through a browser, I get "Uh-oh spaghettios! There's nothing here.".
I have tried all the usual things, like making the HTTPS port dynamic using ${https.port} and using 0.0.0.0 instead of localhost in the http-listener config, but that does not help. Does this have something to do with the proxy version?
Any help or pointers would be great!
Make sure you follow Step 2 from the link below:
Getting Started with Connectors
All,
Got the resolution. The problem was with the certificate chain: the keystore did not contain the intermediate certificates. Once they were added to the keystore, the connectivity worked fine.
If only Mulesoft provided correct errors or detailed logging, I would have saved a lot of time on this.
Thanks for your inputs.
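For anyone hitting the same chain issue: importing an intermediate CA into the listener keystore with keytool looks roughly like this (keystore path, password, alias, and certificate file name are placeholders, not from the original post):

# Import the intermediate CA into the keystore used by the HTTPS listener
keytool -importcert -trustcacerts -alias intermediate-ca -file intermediate-ca.crt -keystore listener-keystore.jks -storepass changeit

# Verify the full chain is now present
keytool -list -v -keystore listener-keystore.jks -storepass changeit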

Keycloak Verification Error

When I deploy my webapp in Keycloak standalone (user/roles defined), I can log in without a problem, but when I deploy it on Wildfly with the Keycloak adapter, I cannot log in and get the error:
[org.keycloak.adapters.OAuthRequestAuthenticator] failed verification of token:
Token type is incorrect. Expected 'Bearer' but was 'null'
Where is the 'Bearer' token defined?
So I found the culprit. If somebody else bumps into the same issue: I spent hours of work on Keycloak 1.5.0.Final. Yesterday 1.6.0 was released, and the issue is resolved. The same problem appeared even in the demo examples. So if you are using Wildfly with the adapter, stay away from Keycloak 1.5.0.
I'm not sure if this helps, as I haven't encountered your error message. Have a look at this and see if it gives you a hint. If not, I'm sorry and good luck :)
Open the admin console and go to Clients.
This is where you configure your client and also the access type (confidential, public, bearer-only). Maybe there is a configuration error here?
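As for where the 'Bearer' token is defined: it is the OAuth access token that the adapter expects in the Authorization header of each request. A quick way to sanity-check a client outside the browser, assuming a client with direct access grants enabled (host, realm, client, and credentials below are placeholders):

# Request an access token from Keycloak's OpenID Connect token endpoint
curl -d grant_type=password -d client_id=my-client -d username=alice -d password=secret https://keycloak.example.com/auth/realms/my-realm/protocol/openid-connect/token

# Call the protected app with the token in the Authorization header
curl -H "Authorization: Bearer $ACCESS_TOKEN" https://app.example.com/api/whoami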

generated serviceaccount token is rejected by kube-apiserver

I have one cluster working successfully without any problems, and I've tried to make a copy of it. It basically works, except for one issue: the token generated by the apiserver is not valid, failing with this error message:
6 handlers.go:37] Unable to authenticate the request due to an error: crypto/rsa: verification error
I have the apiserver started with the following parameters:
kube-apiserver --address=0.0.0.0 --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota --service-cluster-ip-range=10.116.0.0/23 --client_ca_file=/srv/kubernetes/ca.crt --basic_auth_file=/srv/kubernetes/basic_auth.csv --authorization-mode=AlwaysAllow --tls_cert_file=/srv/kubernetes/server.cert --tls_private_key_file=/srv/kubernetes/server.key --secure_port=6443 --token_auth_file=/srv/kubernetes/known_tokens.csv --v=2 --cors_allowed_origins=.* --etcd-config=/etc/kubernetes/etcd.config --allow_privileged=False
I think I'm missing something but can't find what exactly, any help will be appreciated!
So, apparently the controller manager was using the wrong server.key.
According to the Kubernetes documentation, the token is generated by the controller manager.
While copying all of my configuration, I had to change the IP address, and because of that I had to change the certificate as well. But the controller manager had started with the "old" certificate, so after the change it was signing tokens with the wrong server.key.
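In other words: the controller manager signs service account tokens with a private key, and the apiserver verifies them, so both components must agree on the key pair. A minimal sketch of the relevant flags, reusing the paths from the command above (all other flags omitted):

# The controller manager signs service-account tokens with this private key
kube-controller-manager --service-account-private-key-file=/srv/kubernetes/server.key ...

# The apiserver verifies tokens against this key; in older releases it falls
# back to the TLS private key when this flag is unset, so the two files must match
kube-apiserver --service-account-key-file=/srv/kubernetes/server.key --tls-private-key-file=/srv/kubernetes/server.key ...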
These are the flags I start the apiserver with; it works for me. Check them against yours:
--insecure-bind-address=${OS_PRIVATE_IPV4}
--bind-address=${OS_PRIVATE_IPV4}
--tls-cert-file=/srv/kubernetes/server.cert
--tls-private-key-file=/srv/kubernetes/server.key
--client-ca-file=/srv/kubernetes/ca.crt
--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota
--token-auth-file=/srv/kubernetes/known_tokens.csv
--basic-auth-file=/srv/kubernetes/basic_auth.csv
--etcd_servers=http://${OS_PRIVATE_IPV4}:4001
--service-cluster-ip-range=10.10.0.0/16
--logtostderr=true
--v=5

Kube-apiserver complains about remote error bad certificate

I reinstalled some nodes and a master. Now on the master I am getting:
Sep 15 04:53:58 master kube-apiserver[803]: I0915 04:53:58.413581 803 logs.go:41] http: TLS handshake error from $ip:54337: remote error: bad certificate
Where $ip is one of the nodes.
So I likely need to delete or recreate certificates. Where would those be located? Any recommended commands to recreate or remove them, or to copy them from node to master or vice versa? Whatever gets me past this error message...
Take a look through the Creating Certificates section of authentication.md. It walks you through the certificates that you need to create and how to pass them to the system components, and you should be able to use that to re-generate certificates for your cluster.
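If you end up regenerating them by hand, a rough openssl sketch of a cluster CA plus an apiserver serving cert follows; the CN, the SAN list, and the file names are placeholders, and authentication.md describes the full set of certificates you actually need:

# Create a self-signed CA for the cluster
openssl genrsa -out ca.key 2048
openssl req -x509 -new -key ca.key -days 3650 -subj "/CN=kubernetes-ca" -out ca.crt

# Create the apiserver key and CSR, then sign it with the CA, listing every
# IP and DNS name that clients will use to reach the apiserver
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kube-apiserver" -out server.csr
printf "subjectAltName=IP:10.0.0.1,IP:203.0.113.10,DNS:kubernetes" > san.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extfile san.cnf -days 365 -out server.cert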

Not able to connect to cluster. Facing Certificate signed by unknown authority

I am not sure whether what I am trying to do is possible, or whether it is the correct way.
One of my colleagues spun up a Kubernetes GCE cluster (with 1 master and 4 minions) in a project that is shared with me with owner access.
After setup he shared his ~/.kubernetes_auth keys along with .kubecfg.crt, .kubecfg.ca.crt, and .kubecfg.key. I copied all of them to my home folder and set up the Kubernetes workspace.
I also set the project name as the default project in gcloud config, and now I can connect to the master and slaves using 'gcutil ssh --zone us-central1-b kubernetes-master'.
But when I try to list the existing pods using 'cluster/kubecfg.sh list pods',
I see:
"F1017 21:05:31.037148 18021 kubecfg.go:422] Got request error: Get https://107.178.208.109/api/v1beta1/pods?namespace=default: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ChangeMe")
I tried to debug from my side but failed to come to any conclusion. Any sort of clue will be helpful.
You can also copy the cert files off of the master again. They are located in /usr/share/nginx on the master.
It is probably due to a not-yet-implemented feature; see this issue:
https://github.com/GoogleCloudPlatform/kubernetes/issues/1886
You can copy the files from /usr/share/nginx/... on the master
into your home dir and try again.
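A minimal sketch of pulling those certs down, assuming you have SSH access to the master (the remote file names are placeholders; check what is actually sitting in /usr/share/nginx):

# Copy the cluster certs from the master into the files kubecfg.sh expects
scp user@kubernetes-master:/usr/share/nginx/ca.crt ~/.kubecfg.ca.crt
scp user@kubernetes-master:/usr/share/nginx/kubecfg.crt ~/.kubecfg.crt
scp user@kubernetes-master:/usr/share/nginx/kubecfg.key ~/.kubecfg.key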
I figured out a workaround: set the -insecure_skip_tls_verify option
In kubecfg.sh, change the code near the bottom to
else
  auth_config=(
    "-insecure_skip_tls_verify"
  )
fi
Obviously this is insecure, and you are putting yourself at risk of a man-in-the-middle attack, etc.