kubectl unable to recognize "dashboard/deployment.yml" - kubernetes

I am getting the following error when I try to deploy a Kubernetes service to my cluster from my Bitbucket pipeline. The same deployment steps work fine from my local machine, so I am not able to reproduce the issue there.
Is it a certificate issue or some configuration issue?
How can I resolve this?
+ kubectl apply -f dashboard/
unable to recognize "dashboard/deployment.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
unable to recognize "dashboard/ingress.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
unable to recognize "dashboard/secret.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
unable to recognize "dashboard/service.yml": Get https://kube1.mywebsitedomain.com:6443/api?timeout=32s: x509: certificate is valid for kube1, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, not kube1.mywebsitedomain.com
Before running the apply command I set the cluster using kubectl config, and I get the following on the console.
+ kubectl config set-cluster kubernetes --server=https://kube1.mywebsitedomain.com:6443
Cluster "kubernetes" set.

It was a certificate issue. Using the right certificate would definitely solve this problem, but in my case certificate verification wasn't necessary, as a secure connection is not required for this spike.
So here is my workaround:
I used the --insecure-skip-tls-verify flag with kubectl and it worked fine.
+ kubectl --insecure-skip-tls-verify apply -f dashboard/
deployment.extensions/kubernetes-dashboard unchanged
ingress.extensions/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-auth unchanged
service/kubernetes-dashboard unchanged
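For completeness, the "right certificate" fix would mean re-issuing the API server's serving certificate so that it includes the public DNS name as a SAN. A sketch, assuming a kubeadm-managed control plane with the default certificate paths (back up the existing files first and restart the kube-apiserver afterwards):
# move the current serving cert out of the way
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.bak
sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.bak
# regenerate it with the extra SAN
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans kube1.mywebsitedomain.com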

Related

What causes x509 cert unknown authority in some Kubernetes clusters when using the Hashicorp Vault CLI?

I'm trying to deploy an instance of HashiCorp Vault with TLS and integrated storage using the official Helm chart. I've run through the official tutorial using minikube without any issues, and I also tested the tutorial with a cluster created with kind. It went as expected on both minikube and kind; however, when I tried it on a production cluster created with TKGI (Tanzu Kubernetes Grid Integrated) I ran into x509 errors when running vault commands in the server pods. I can get past some of them by using -tls-skip-verify, but what might be different between these clusters to cause the warning? It also seems to be causing additional problems when I try to join the replicas to the raft pool.
Here's an example showing the x509 error,
bash-3.2$ kubectl exec -n vault vault-0 -- vault operator init \
> -key-shares=1 \
> -key-threshold=1 \
> -format=json > /tmp/vault/cluster-keys.json
Get "https://127.0.0.1:8200/v1/sys/seal-status": x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "ca")
Is there something that could be updated on the TKGI clusters so that these x509 errors could be avoided?
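One thing worth checking (a sketch, not a confirmed fix): point the Vault CLI at the CA that the Helm chart mounts into the pod via the VAULT_CACERT environment variable, so verification can succeed rather than being skipped. The mount path below is illustrative and depends on how your values mount the TLS secret:
kubectl exec -n vault vault-0 -- sh -c 'VAULT_CACERT=/vault/userconfig/vault-ha-tls/vault.ca vault status'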

Terraform dial tcp 192.xx.xx.xx:443: i/o timeout error

I am trying to implement CI/CD using GitLab + Terraform against a K8S cluster; the K8S control plane (master node) was set up on CentOS.
However, the pipeline job fails with the following error:
Error: Failed to get existing workspaces: Get "https://192.xx.xx.xx/api/v1/namespaces/default/secrets?labelSelector=tfstate%3Dtrue": dial tcp 192.xx.xx.xx:443: i/o timeout
From the error mentioned above (default/secrets?labelSelector=tfstate%3Dtrue), I assume the error is related to a missing Terraform state secret in the default namespace.
Example (the Terraform state secret, listed from my Windows machine):
PS C:\> kubectl get secret
NAME                    TYPE                                  DATA   AGE
default-token-7mzv6     kubernetes.io/service-account-token   3      27d
tfstate-default-state   Opaque                                1      15h
However, I am not sure which process creates this secret, or whether we should create it manually.
Kindly let me know if my understanding is wrong and whether I have missed anything else.
EDIT
The issue mentioned above occurred because the existing GitLab runner was on a different subnet (e.g. 172.xx.xx.xx instead of 192.xx.xx.xx).
I was asked to use a different GitLab runner that runs on the same subnet, and now it throws the following error:
Error: Failed to get existing workspaces: Get "https://192.xx.xx.xx:6443/api/v1/namespaces/default/secrets?labelSelector=tfstate%3Dtrue": x509: certificate signed by unknown authority
Now I am a bit confused whether the certificate issue is between the GitLab runner and the GitLab server, between the GitLab server and the K8S cluster, or something else.
You have configured Kubernetes as the remote state backend for your Terraform configuration. The error occurs because the backend queries existing secrets to determine which workspaces are configured. The x509: certificate signed by unknown authority message indicates that the KUBECONFIG the remote state backend uses does not match the CA of the API server you're connecting to.
If the runners are K8s pods themselves, make sure you provide a KUBECONFIG that matches your target cluster, and that the remote state backend does not configure itself as in-cluster by reading the service account token every K8s pod has - that will, in most cases, only work for the cluster the pod is running on.
You don't provide enough information to be more specific, but big picture: you have to configure the state backend and any provider that connects to K8s. Theoretically, the state backend secrets and the K8s resources do not have to be on the same cluster, meaning you may need different configurations for the state backend and the K8s providers.
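A minimal sketch of wiring the backend to the right cluster from the CI job, assuming the backend "kubernetes" block sets config_path = "./ci-kubeconfig" and the CI variables KUBE_CA_PEM / KUBE_TOKEN hold the target cluster's CA and a service-account token (all names are illustrative):
echo "$KUBE_CA_PEM" > ci-ca.pem
kubectl config set-cluster target --server=https://192.xx.xx.xx:6443 --certificate-authority=ci-ca.pem --embed-certs=true --kubeconfig=ci-kubeconfig
kubectl config set-credentials ci --token="$KUBE_TOKEN" --kubeconfig=ci-kubeconfig
kubectl config set-context ci --cluster=target --user=ci --kubeconfig=ci-kubeconfig
kubectl config use-context ci --kubeconfig=ci-kubeconfig
terraform init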

kubectl x509: certificate signed by unknown authority

We have a running instance on GKE. Since this afternoon we have been receiving an "x509: certificate signed by unknown authority" error from newly created Kubernetes clusters. kubectl works with the old clusters but not with the new ones.
What we tried:
gcloud update
kubectl update
gcloud re-authenticate
clean gcloud install & auth
.kube/config cert remove & gcloud container clusters get-credentials
remove and add new clusters
Thanks.
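One diagnostic sketch that may help narrow this down: compare the CA that gcloud reports for the new cluster with the CA embedded in the kubeconfig entry that get-credentials wrote (cluster, zone and context names below are placeholders):
gcloud container clusters describe my-new-cluster --zone europe-west1-b --format='value(masterAuth.clusterCaCertificate)'
kubectl config view --raw -o jsonpath='{.clusters[?(@.name=="gke_my-project_europe-west1-b_my-new-cluster")].cluster.certificate-authority-data}'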

Deploying an app to gke from CI

I use GitLab for my CI; they host it and I have my own runners.
I have a k8s cluster running in GKE.
I want to use kubectl apply to deploy new versions of my containers.
This all works from my local machine because it uses my Google account.
I tried setting this all up as suggested by the k8s and GitLab documentation:
1. copy over the ca.crt
2. copy over the token
- echo "$KUBE_CA_PEM" > kube_ca.pem
- kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
- kubectl config set-credentials default-admin --token=$KUBE_TOKEN
- kubectl config set-context default-system --cluster=default-cluster --user=default-admin
- kubectl config use-context default-system
When I do this it fails with x509: certificate signed by unknown authority.
I tried going to the Google Cloud console > cluster > show credentials and specifying the username and password shown there instead of the token; this fails with the same error.
Finally I tried using --insecure-skip-tls-verify=true, but then it complains: error: You must be logged in to the server (the server has asked for the client to provide credentials)
Any help would be appreciated.
The cause of this problem was an incorrect server URL. The server needs to be the one defined on the cluster information page in the Google Cloud console, where you will find an Endpoint IP address.
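A sketch of fetching that endpoint (and the matching CA) with gcloud instead of hard-coding it, so the server URL and the certificate always agree; cluster name and zone are placeholders:
ENDPOINT=$(gcloud container clusters describe my-cluster --zone europe-west1-b --format='value(endpoint)')
gcloud container clusters describe my-cluster --zone europe-west1-b --format='value(masterAuth.clusterCaCertificate)' | base64 -d > kube_ca.pem
kubectl config set-cluster default-cluster --server="https://$ENDPOINT" --certificate-authority=kube_ca.pem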

Getting "x509: certificate signed by unknown authority" even with "--insecure-skip-tls-verify" option in Kubernetes

I have a private Docker image registry running on a Linux VM (10.78.0.228:5000) and a Kubernetes master running on a different VM running CentOS Linux 7.
I used the below command to create a POD:
kubectl create --insecure-skip-tls-verify -f monitorms-rc.yml
I get this:
sample monitorms-mmqhm 0/1 ImagePullBackOff 0 8m
and upon running:
kubectl describe pod monitorms-mmqhm --namespace=sample
Warning  Failed  Failed to pull image "10.78.0.228:5000/monitorms":
Error response from daemon: {"message":"Get https://10.78.0.228:5000/v1/_ping: x509: certificate signed by unknown authority"}
Isn't Kubernetes supposed to ignore the server certificate for all operations during pod creation when --insecure-skip-tls-verify is passed?
If not, how do I make it ignore TLS verification while pulling the Docker image?
PS:
Kubernetes version :
Client Version: v1.5.2
Server Version: v1.5.2
I have raised this issue here: https://github.com/kubernetes/kubernetes/issues/43924
The issue you're seeing is actually a Docker issue. --insecure-skip-tls-verify is a valid argument to kubectl, but it only affects the connection between kubectl and the Kubernetes API server. The error occurs because the Docker daemon on the node cannot pull from the private registry, since the certificate the registry presents is not signed by a trusted CA.
Have a look at the Docker insecure registry docs and this should solve your problem.
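In practice that usually means marking the registry as insecure in the Docker daemon configuration on every node that pulls from it, for example (a sketch; restart the daemon afterwards):
# /etc/docker/daemon.json on each node
{
  "insecure-registries": ["10.78.0.228:5000"]
}
sudo systemctl restart docker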