How to add more nodes to the self-signed certificate of the Kubernetes Dashboard - kubernetes

I finally managed to resolve my earlier question about adding more nodes to the CAs of the master nodes (How to add extra nodes to the certificate-authority-data from a self signed k8s cluster?).
Now the problem I am facing is that I want to use a kubeconfig file, e.g. ~/.kube/config, to access the Dashboard.
I managed to figure it out; the file has the following syntax:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://IP:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
    token: REDACTED
The problem I am having is that I need to use the IP of one of the master nodes in order to reach the Dashboard; I would like to be able to use the LB IP instead.
I assume this is related to the same problem I had before, since I can see from the deployment that the certificates are auto-generated:
args:
- --auto-generate-certificates
- etc etc
...
Apart from creating the certificates yourself in order to use them, is there any option to pass e.g. IP1 / IP2 etc. in a flag within the file?
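As far as I know there is no flag that feeds extra IPs/SANs into --auto-generate-certificates; the usual alternative is to generate a certificate yourself that already contains the LB IP and hostname as SANs and hand it to the Dashboard through the kubernetes-dashboard-certs secret. A rough sketch (10.0.0.100 and dashboard.example.com are placeholders; check the Dashboard docs for your version for the exact --tls-cert-file / --tls-key-file argument values):
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout dashboard.key -out dashboard.crt \
    -subj "/CN=dashboard.example.com" \
    -addext "subjectAltName=DNS:dashboard.example.com,IP:10.0.0.100"
$ kubectl -n kubernetes-dashboard delete secret kubernetes-dashboard-certs
$ kubectl -n kubernetes-dashboard create secret generic kubernetes-dashboard-certs \
    --from-file=dashboard.crt --from-file=dashboard.key
Then replace --auto-generate-certificates in the Deployment args with --tls-cert-file=dashboard.crt and --tls-key-file=dashboard.key and restart the pod.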
Update: I am deploying the Dashboard the recommended way, kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (Deploying the Dashboard UI). The deployment is on-prem, but I have configured the cluster with an external load balancer (HAProxy) in front of the API, plus an Ingress and type: LoadBalancer on the Ingress. Everything seems to be working as expected apart from the Dashboard UI (through the LB IP). I am also using authorization-mode: Node,RBAC on the cluster (in case it is relevant).
I access the Dashboard through the Ingress over HTTPS, e.g. https://dashboard.example.com.
I was getting the error Not enough data to create auth info structure. and found the token: xxx solution in this question: Kubernetes Dashboard access using config file Not enough data to create auth info structure.
If I swap the LB IP for one of the master node IPs, then I can access the UI with the kubeconfig file.
I just updated to the latest version of the Dashboard, v2.0.5: it does not work with the kubeconfig button / file, but it works with the token directly (kubernetes/Dashboard v2.0.5). With the previous version everything works as described above. There are no errors in the pod logs.
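For reference, the fix from that linked question boils down to embedding a ServiceAccount token in the kubeconfig user, since the Dashboard's kubeconfig login does not use the client certificates. A rough sketch, assuming a pre-1.24 cluster and a ServiceAccount named admin-user in the kubernetes-dashboard namespace (adjust the names to your setup):
$ SA_SECRET=$(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}')
$ TOKEN=$(kubectl -n kubernetes-dashboard get secret $SA_SECRET -o jsonpath='{.data.token}' | base64 --decode)
$ kubectl config set-credentials kubernetes-admin --token="$TOKEN"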

Related

connect to kubernetes cluster from local machine using kubectl

I have installed a kubernetes cluster on EC2 instances on AWS.
1 master node and 2 worker nodes.
Everything works fine when I connect to the master node and issue commands using kubectl.
But I want to be able to issue kubectl commands from my local machine.
So I copied the contents of .kube/config file from master node to my local machine's .kube/config.
I have only changed the IP address of the server, because the original file referenced an internal IP. The file now looks like this:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1URXhNVEUyTXpneE5Gb1hEVE14TVRFd09U4M0xTCkJ1THZGK1VMdHExOHovNG0yZkFEMlh4dmV3emx0cEovOUlFbQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://35.166.48.257:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkhZQStwL3UvM013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeE1URXhOak00TVRSYUZ3MHlNakV4TVRFeE5qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVCQVFzRkFBT0NBUUVBdjJlVTBzU1cwNDdqUlZKTUQvYm1WK1VwWnRBbU1NVDJpMERNCjhCZjhDSm1WajQ4QlpMVmg4Ly82dnJNQUp6YnE5cStPa3dBSE1iWVQ4TTNHK09RUEdFcHd3SWRDdDBhSHdaRVQKL0hlVnI2eWJtT2VNeWZGNTJ1M3RIS3MxU1I1STM5WkJPMmVSU2lDeXRCVSsyZUlCVFkrbWZDb3JCRWRnTzJBMwpYQVVWVlJxRHVrejZ6OTAyZlJkd29yeWJLaU5mejVWYXdiM3VyQUxKMVBrOFpMNE53QU5vejBEL05HekpNT2ZUCjJGanlPeXcrRWFFMW96UFlRTnVaNFBuM1FWdlFMVTQycU5adGM0MmNKbUszdlBVWHc1LzBYbkQ4anNocHpNbnYKaFZPb2Y2ZGp6YzZMRGNzc1hxVGRYZVdIdURKMUJlcUZDbDliaDhQa1NQNzRMTnE3NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeVY1TGdGMjFvbVBBWGh2eHlzKzJIUi8xQXpLNThSMkRUUHdYYXZmSjduS1hKczh5CjBETkY5RTFLVmIvM0dwUDROcC84WEltRHFpUHVoN2J1YytYNkp1T0J0bGpwM0w1ZEFjWGxPaTRycWJMR1FBdzUKdG90UU94OHoyMHRLckFTbElUdUFwK3ZVMVR0M25hZ0xoK2JqdHVzV0wrVnBSdDI0d0JYbm93eU10ZW5HRUdLagpKRXJFSmxDc1pKeTRlZWdXVTZ3eDBHUm1TaElsaE9JRE9yenRValVMWVVVNUJJODBEMDVSSzBjeWRtUjVYTFJ1CldIS0kxZ3hZRnBPTlh4VVlOVWMvVU1YbjM0UVdJeE9GTTJtSWd4cG1jS09vY3hUSjhYWWRLV2tndDZoN21rbGkKejhwYjV1VUZtNURJczljdEU3cFhiUVNESlQzeXpFWGFvTzJQa1FJREFRQUJBb0lCQUhhZ1pqb28UZCMGNoaUFLYnh1RWNLWEEvYndzR3RqU0J5MFNFCmtyQ2FlU1BBV0hBVUZIWlZIRWtWb1FLQmdERllwTTJ2QktIUFczRk85bDQ2ZEIzUE1IMHNMSEdCMmN2Y3JZbFMKUFY3bVRhc2Y0UEhxazB3azlDYllITzd0UVg0dlpBVXBVZWZINDhvc1dJSjZxWHorcTEweXA4cDNSTGptaThHSQoyUE9rQmQ0U05IY0habXRUcExEYzhsWG13aXl2Z1RNakNrU0tWd3l5UDVkUlNZZGVWbUdFSDl1OXJZVWtUTkpwCjRzQUJBb0dCQUpJZjA4TWl2d3h2Z05BQThxalllYTQzTUxUVnJuL3l0ck9LU0RqSXRkdm9QYnYrWXFQTnArOUUKdUZONDlHRENtc0UvQUJwclRpS2hyZ0I4aGI4SkM5d3A3RmdCQ25IU0tiOVVpVG1KSDZQcDVYRkNKMlJFODNVNQp0NDBieFE0NXY3VzlHRi94MWFpaW9nVUlNcTkxS21Vb1RUbjZhZHVkMWM5bk5yZmt3cXp3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
When I try to use a kubectl command from my local machine I get this error :
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 172.31.4.108, not 35.166.48.257
This is because the kube-apiserver TLS cert is only valid for 10.96.0.1 and 172.31.4.108, not for 35.166.48.257. There are several options, such as telling kubectl to skip TLS verification, but I would not recommend that. The best option would be to regenerate the whole PKI on your cluster.
Both ways are described here.
Next time, for a kubeadm cluster, you can pass --apiserver-cert-extra-sans=EXTERNAL_IP at cluster init to also add the external IP to the API server TLS cert.
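On a kubeadm cluster, regenerating just the apiserver certificate with the extra SAN looks roughly like this (a sketch to run on the master node; back up the old files first, and 35.166.48.257 is the external IP from the question):
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /tmp/
$ sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=35.166.48.257
Then restart the kube-apiserver (or the kubelet) on the master so it picks up the new certificate.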

How to add new cluster in ArgoCD (use config file of Rancher)? - the server has asked for the client to provide credentials

I want to add a new cluster in addition to the default cluster on ArgoCD but when I add it, I get an error:
FATA[0001] rpc error: code = Unknown desc = REST config invalid: the server has asked for the client to provide credentials
I use the command argocd cluster add cluster-name
I downloaded the k8s config file from Rancher.
Thanks!
I solved my problem but welcome other solutions from everyone :D
First, create a secret with the following content:
apiVersion: v1
kind: Secret
metadata:
  namespace: argocd # same namespace of argocd-app
  name: mycluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: cluster-name # Get from clusters - name field in config k8s file.
  server: https://mycluster.com # Get from clusters - name - cluster - server field in config k8s file.
  config: |
    {
      "bearerToken": "<authentication token>",
      "tlsClientConfig": {
        "insecure": false,
        "caData": "<base64 encoded certificate>"
      }
    }
bearerToken - Get from users - user - token field in config k8s file.
caData - Get from clusters - name - cluster - certificate-authority-data field in config k8s file.
Then, apply this yaml file and the new cluster will be automatically added to ArgoCD.
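A usage sketch to go with it: pulling the two values out of the downloaded Rancher kubeconfig with kubectl instead of by eye, then applying the secret and checking the result (rancher.yaml and mycluster-secret.yaml are placeholder file names, and --minify assumes the target cluster is the current context):
$ kubectl --kubeconfig rancher.yaml config view --raw --minify -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'   # caData
$ kubectl --kubeconfig rancher.yaml config view --raw --minify -o jsonpath='{.users[0].user.token}'   # bearerToken
$ kubectl apply -f mycluster-secret.yaml
$ argocd cluster list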
I found the solution on github:
https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80

Kubernetes Host IP For Vault Authentication with Kubernetes Engine

I was trying to authenticate Kubernetes with an external Vault using HashiCorp's tutorial: https://learn.hashicorp.com/vault/identity-access-management/vault-agent-k8s
In the configuration below, we have to provide the endpoint of our cluster in K8S_HOST:
vault write auth/kubernetes/config \
    token_reviewer_jwt="$SA_JWT_TOKEN" \
    kubernetes_host="https://$K8S_HOST:8443" \
    kubernetes_ca_cert="$SA_CA_CRT"
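For context, the two variables in that snippet come from the vault-auth ServiceAccount that the tutorial creates earlier. Roughly, on a pre-1.24 cluster where ServiceAccount token secrets are still auto-created (vault-auth is the name used in the guide):
$ SA_SECRET_NAME=$(kubectl get sa vault-auth -o jsonpath="{.secrets[0].name}")
$ SA_JWT_TOKEN=$(kubectl get secret $SA_SECRET_NAME -o jsonpath="{.data.token}" | base64 --decode)
$ SA_CA_CRT=$(kubectl get secret $SA_SECRET_NAME -o jsonpath="{.data.ca\.crt}" | base64 --decode)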
I have set up a Kubernetes HA cluster in a private subnet with an ALB in front. I need help configuring the K8S_HOST endpoint.
As of now I have generated SSL certs and recreated the dashboard.
I tried exposing kubernetes-dashboard as a NodePort.
I updated the certificate on the ALB, which is listening on 443.
But it is still not connecting to the cluster.
So my doubt is: is K8S_HOST:8443 the same endpoint as the Kubernetes dashboard, or something else?
What is the proper way to get the K8S_HOST details for a cluster in a private subnet?
Can someone please help with this? I am stuck here.
Use the kubectl config view command to view the cluster configuration:
$ kubectl config view --flatten --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXdOakE0TlRFd05sb1hEVE13TURNd05EQTROVEV3Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRlClg5eWZpN0JhVVlUNmhUcEUvNm02WW5HczlZSHY3SmFMOGxsWWsvOENUVjBRcUk4VjBOYnB5V3ByQjBadmV4ZmMKQ0NTQ2hkYWFlcVBQWUJDckxTSGVoVllZcE1PK2UrMVFkTFN2RmpnZUQ1UHY0NFBqTW1MeFAzVkk0MFVZOXVNNwpCcjRueVRPYnJpWXJaSVhTYjdTbWRTdFg5TUgreVVXclNrRllGSEhnREVRdXF0dFRJZ1pjdUh2MzY3Nkpyc1FuCmI1TlM0ZHJyc0U0NVZUcWYrSXR1NzRSa1VkOUsvcTNHMHN1SlVMZ3AxOUZ4ZXorYTNRenJiUTdyWTlGUEhsSG4KVno1N1dWYmt2cjMzOUxnNWd0VzB4am10Q1hJaGgzNFRqRE1OazNza0VvcFBibjJPcER5STVUMUtOL3Vsa0FmTAptcXJ4bU5VNEVVYy9NcWFoVlVrQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBL3c0OEFkdFU3Tkx2d0k1S2N4Y3hMMitBT0IKV29nakFKNUMwTlBTN1NDR2tWK2d6dlcrZHZVYWVtYTFHUFlJdUJuajBVR2k2QUF5SStES0tiY01iL2dVVUdKQQp0YVpEcFpidU1lZ1JZOVZ2dlpMZXRZQndESzkvWk9lYnF1MGh6eWo4TzduTnJaM3RIb3h6VW1MaVVIU2Jmc0R1CnkvaE9IM0wvUE1mZ0FFaHF5SVZwWGMvQzZCYWNlOEtRSWJMQ0hYZmZjTGhEWDQ0THZYSXVIL1Y3LzN1cHcxWm8KK05NcFY5Sys4TTExNHV2bWdyOHdTNkZHYlltdXFVZy9CTlpRd2FqKzVWMEZ6azZzeHoySTdZSXI3NHVNK3BLRgpMS3lEQzJnK2NXTU5YZTV0S0YrVG5zUXE1eWtNVEJKeHl1bTh5a3VtZTE4MGcyS1o3NzVTdVF1Ni9kND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://127.0.0.1:32769 # <<-----------here
name: kind-kind
contexts:
- context:
cluster: kind-kind
user: kind-kind
name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
... ... ...
Copy the server address and use it as kubernetes_host when configuring the Vault Kubernetes auth method.
$ vault write auth/kubernetes/config \
    token_reviewer_jwt="eyJhbGciOiJSUz....." \
    kubernetes_host="https://127.0.0.1:32769" \
    kubernetes_ca_cert=@examples/guides/vault-server/ca.crt
N.B.: If the server address does not contain a port number, there is no need to add one. Keep the address as it is.
Demo server address for GKE:
server: https://35.203.181.169
Demo server address for DigitalOcean k8s cluster:
server: https://e8dabcb3-**bb-451e****d5.k8s.ondigitalocean.com
This has been resolved.
I had to add the ALB DNS name to the API server certificate and then reboot all nodes.
Authentication is now working fine.
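A quick way to verify that the ALB DNS name actually made it into the API server's serving certificate is to inspect the SANs it presents (a diagnostic sketch; replace the host and port with your ALB endpoint):
$ echo | openssl s_client -connect my-alb.example.com:443 2>/dev/null | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"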

kubernetes: Authentication to ui with default config file fails

I have successfully set up a kubernetes cluster on AWS using kops and the following commands:
$ kops create cluster --name=<my_cluster_name> --state=s3://<my-state-bucket> --zones=eu-west-1a --node-count=2 --node-size=t2.micro --master-size=t2.small --dns-zone=<my-cluster-dns>
$ kops update cluster <my-cluster-name> --yes
When accessing the dashboard, I am prompted to either enter a token or
Please select the kubeconfig file that you have created to configure access to the cluster.
When creating the cluster, a ~/.kube/config file was created that has the following form:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
      <some_key_or_token_here>
    server: https://api.<my_cluster_url>
  name: <my_cluster_name>
contexts:
- context:
    cluster: <my_cluster_name>
    user: <my_cluster_name>
  name: <my_cluster_name>
current-context: <my_cluster_name>
kind: Config
preferences: {}
users:
- name: <my_cluster_name>
  user:
    as-user-extra: {}
    client-certificate-data:
      <some_key_or_certificate>
    client-key-data:
      <some_key_or_certificate>
    password: <password>
    username: admin
- name: <my-cluster-url>-basic-auth
  user:
    as-user-extra: {}
    password: <password>
    username: admin
Why, when pointing the Kubernetes UI to the above file, do I get:
Authentication failed. Please try again.
I tried the same and had the same problem. It turns out that kops sets up certificate-based authentication, and certificate-based authentication can't be used with the web UI. Instead, I tried token-based authentication. Next question: where do you find the token?
kubectl describe secret
This will show you the default token for the cluster. I assume this is very bad security practice, but if you're using the UI to improve your learning and understanding, it will get you moving in the right direction.
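Spelled out a bit more, assuming the default token secret lives in the kube-system namespace (the secret names will differ per cluster):
$ kubectl -n kube-system get secrets | grep default-token
$ kubectl -n kube-system describe secret <default-token-xxxxx>
Copy the value of the token: field into the Dashboard login screen.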
This Dashboard wiki page is about authentication. That's where I discovered how to do it.
In order to enable basic auth in the Dashboard, the --authentication-mode=basic flag has to be provided. By default it is set to --authentication-mode=token.
To get the token or understand more about access control, please refer here.

Kubectl Error when accessing Namespaces

I was trying out the Tectonic Kubernetes sandbox setup and according to their documentation:
https://coreos.com/tectonic/docs/latest/tutorials/first-app.html
I downloaded kubectl and the corresponding kubeconfig file, but when I tried to get the namespaces using the following command:
kubectl get namespaces
I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What is this? Where is it picking up this port localhost:8080 from?
EDIT:
Joe-MacBook-Pro:~ joe$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
Joe-MacBook-Pro:~ joe$
I'm lacking some details on your setup, but the problem is basically clear - you're not connected to the cluster.
You should have a kubeconfig file containing the cluster connection information, i.e. the context; I assume that if you run kubectl config view you'll get nothing.
I'm on Windows using Git Bash; if I run the same command I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://platform-svc-integration.net
  name: svc-integration
contexts:
- context:
    cluster: svc-integration
    user: svc-integration-admin
  name: svc-integration-system
current-context: svc-integration-system
kind: Config
preferences: {}
users:
- name: svc-integration-admin
  user:
    client-certificate: <path>/admin/admin.crt
    client-key: <path>/admin/admin.key
Basically, what I'm trying to say is that you need to configure your context. Start by running kubectl config --help to list your options; it's pretty straightforward, but if you don't manage, just refer to the documentation.
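If you end up building the context by hand instead of copying an existing kubeconfig, the general shape is something like this (a sketch with placeholder names, server address and file paths):
$ kubectl config set-cluster my-cluster --server=https://<api-server>:6443 --certificate-authority=<path>/ca.crt
$ kubectl config set-credentials my-admin --client-certificate=<path>/admin.crt --client-key=<path>/admin.key
$ kubectl config set-context my-context --cluster=my-cluster --user=my-admin
$ kubectl config use-context my-context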