connect to kubernetes cluster from local machine using kubectl - kubernetes

I have installed a kubernetes cluster on EC2 instances on AWS.
1 master node and 2 worker nodes.
Everything works fine when I connect to the master node and issue commands using kubectl.
But I want to be able to issue kubectl commands from my local machine.
So I copied the contents of the .kube/config file from the master node to my local machine's .kube/config.
I only changed the IP address of the server, because the original file references an internal IP. The file looks like this now:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1URXhNVEUyTXpneE5Gb1hEVE14TVRFd09U4M0xTCkJ1THZGK1VMdHExOHovNG0yZkFEMlh4dmV3emx0cEovOUlFbQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://35.166.48.257:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJYkhZQStwL3UvM013RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFeE1URXhOak00TVRSYUZ3MHlNakV4TVRFeE5qTTRNVGRhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVCQVFzRkFBT0NBUUVBdjJlVTBzU1cwNDdqUlZKTUQvYm1WK1VwWnRBbU1NVDJpMERNCjhCZjhDSm1WajQ4QlpMVmg4Ly82dnJNQUp6YnE5cStPa3dBSE1iWVQ4TTNHK09RUEdFcHd3SWRDdDBhSHdaRVQKL0hlVnI2eWJtT2VNeWZGNTJ1M3RIS3MxU1I1STM5WkJPMmVSU2lDeXRCVSsyZUlCVFkrbWZDb3JCRWRnTzJBMwpYQVVWVlJxRHVrejZ6OTAyZlJkd29yeWJLaU5mejVWYXdiM3VyQUxKMVBrOFpMNE53QU5vejBEL05HekpNT2ZUCjJGanlPeXcrRWFFMW96UFlRTnVaNFBuM1FWdlFMVTQycU5adGM0MmNKbUszdlBVWHc1LzBYbkQ4anNocHpNbnYKaFZPb2Y2ZGp6YzZMRGNzc1hxVGRYZVdIdURKMUJlcUZDbDliaDhQa1NQNzRMTnE3NGc9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBeVY1TGdGMjFvbVBBWGh2eHlzKzJIUi8xQXpLNThSMkRUUHdYYXZmSjduS1hKczh5CjBETkY5RTFLVmIvM0dwUDROcC84WEltRHFpUHVoN2J1YytYNkp1T0J0bGpwM0w1ZEFjWGxPaTRycWJMR1FBdzUKdG90UU94OHoyMHRLckFTbElUdUFwK3ZVMVR0M25hZ0xoK2JqdHVzV0wrVnBSdDI0d0JYbm93eU10ZW5HRUdLagpKRXJFSmxDc1pKeTRlZWdXVTZ3eDBHUm1TaElsaE9JRE9yenRValVMWVVVNUJJODBEMDVSSzBjeWRtUjVYTFJ1CldIS0kxZ3hZRnBPTlh4VVlOVWMvVU1YbjM0UVdJeE9GTTJtSWd4cG1jS09vY3hUSjhYWWRLV2tndDZoN21rbGkKejhwYjV1VUZtNURJczljdEU3cFhiUVNESlQzeXpFWGFvTzJQa1FJREFRQUJBb0lCQUhhZ1pqb28UZCMGNoaUFLYnh1RWNLWEEvYndzR3RqU0J5MFNFCmtyQ2FlU1BBV0hBVUZIWlZIRWtWb1FLQmdERllwTTJ2QktIUFczRk85bDQ2ZEIzUE1IMHNMSEdCMmN2Y3JZbFMKUFY3bVRhc2Y0UEhxazB3azlDYllITzd0UVg0dlpBVXBVZWZINDhvc1dJSjZxWHorcTEweXA4cDNSTGptaThHSQoyUE9rQmQ0U05IY0habXRUcExEYzhsWG13aXl2Z1RNakNrU0tWd3l5UDVkUlNZZGVWbUdFSDl1OXJZVWtUTkpwCjRzQUJBb0dCQUpJZjA4TWl2d3h2Z05BQThxalllYTQzTUxUVnJuL3l0ck9LU0RqSXRkdm9QYnYrWXFQTnArOUUKdUZONDlHRENtc0UvQUJwclRpS2hyZ0I4aGI4SkM5d3A3RmdCQ25IU0tiOVVpVG1KSDZQcDVYRkNKMlJFODNVNQp0NDBieFE0NXY3VzlHRi94MWFpaW9nVUlNcTkxS21Vb1RUbjZhZHVkMWM5bk5yZmt3cXp3Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
When I try to use a kubectl command from my local machine I get this error:
Unable to connect to the server: x509: certificate is valid for 10.96.0.1, 172.31.4.108, not 35.166.48.257

This is because the kube-apiserver TLS certificate is only valid for 10.96.0.1 and 172.31.4.108, not for 35.166.48.257. There are several options, such as telling kubectl to skip TLS verification, but I would not recommend that. The best option is to regenerate the PKI on your cluster.
Both ways are described here.
Next time, for a kubeadm cluster, you can pass --apiserver-cert-extra-sans=EXTERNAL_IP at cluster init to also add the external IP to the API server TLS certificate.
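If the cluster was set up with kubeadm and you only need the API server certificate (not the whole PKI) to include the external IP, a minimal sketch of the usual procedure looks like this (run on the master node; paths assume kubeadm defaults):
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.bak
$ sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.bak
$ sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=35.166.48.257
# restart the kube-apiserver static pod so it picks up the new certificate,
# e.g. by temporarily moving /etc/kubernetes/manifests/kube-apiserver.yaml out of the directory and back
Afterwards, kubectl on the local machine should accept the certificate for the external IP.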

Related

How to add more nodes in the self signed certificate of Kubernetes Dashboard

I finally managed to resolve my question about how to add more nodes to the CAs of the master nodes (How to add extra nodes to the certificate-authority-data from a self signed k8s cluster?).
Now the problem that I am facing is that I want to use a kubeconfig file, e.g. ~/.kube/config, to access the Dashboard.
I managed to figure it out with the following syntax:
$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://IP:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubernetes-admin
name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
user:
client-certificate-data: REDACTED
client-key-data: REDACTED
token: REDACTED
The problem that I am having is that I need to use the IP of one of the master nodes in order to be able to reach the Dashboard. I would like to be able to use the LB IP to reach the Dashboard instead.
I assume this is related to the same problem that I had before, since I can see from the file that the certificates are auto-generated:
args:
- --auto-generate-certificates
- etc etc
.
.
.
Apart from creating the certificates yourself in order to use them, is there any option to pass e.g. IP1 / IP2 etc. in a flag within the file?
Update: I am deploying the Dashboard the recommended way, kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml (Deploying the Dashboard UI). The deployment is on-prem, but I have configured the cluster with an external load balancer (HAProxy) in front of the API, plus Ingress, with type: LoadBalancer on the Ingress. Everything seems to be working as expected apart from the Dashboard UI (through the LB IP). I am also using authorization-mode: Node,RBAC (if relevant).
I access the Dashboard through Ingress over HTTPS, e.g. https://dashboard.example.com.
I got the error Not enough data to create auth info structure. and found the token: xxx solution in this question: Kubernetes Dashboard access using config file Not enough data to create auth info structure.
If I swap the LB IP for one of the master node IPs, then I can access the UI with the kubeconfig file.
I have now updated to the latest version of the dashboard, v2.0.5: it does not work with the kubeconfig button / file, but it works with the token directly (kubernetes/Dashboard v2.0.5). With the previous version everything works as described above. There are no errors in the pod logs.
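For reference, a common way to make both login options work is to fetch a ServiceAccount token and, if desired, embed it in the kubeconfig used for the Dashboard login. A sketch, assuming an admin-user ServiceAccount in the kubernetes-dashboard namespace (as in the Dashboard's sample RBAC manifests) and a cluster version that still auto-creates ServiceAccount token secrets:
$ kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# optionally add the token to the kubeconfig user so the kubeconfig login option can pick it up
$ kubectl config set-credentials kubernetes-admin --token=<token from the command above>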

Kubernetes Host IP For Vault Authentication with Kubernetes Engine

I was trying to authenticate Kubernetes with an external Vault using HashiCorp's tutorial: https://learn.hashicorp.com/vault/identity-access-management/vault-agent-k8s
In the configuration below, we have to provide the endpoint to our cluster in K8S_HOST:
vault write auth/kubernetes/config \
token_reviewer_jwt="$SA_JWT_TOKEN" \
kubernetes_host="https://$K8S_HOST:8443" \
kubernetes_ca_cert="$SA_CA_CRT"
I have set up a Kubernetes HA cluster in a private subnet with an ALB in front. I need help configuring the K8S_HOST endpoint.
So far I have generated SSL certs and recreated the dashboard.
I tried exposing kubernetes-dashboard as a NodePort.
I updated the certificate in the ALB, which is listening on 443.
But it is still not connecting to the cluster.
So my doubt is: is K8S_HOST:8443 the same endpoint as the Kubernetes dashboard, or something else?
What is the proper way to get the K8S_HOST details from a cluster in a private subnet?
Can someone please help with this? I am stuck here.
Use the kubectl config view command to view the cluster configuration:
$ kubectl config view --flatten --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01ETXdOakE0TlRFd05sb1hEVE13TURNd05EQTROVEV3Tmxvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDRlClg5eWZpN0JhVVlUNmhUcEUvNm02WW5HczlZSHY3SmFMOGxsWWsvOENUVjBRcUk4VjBOYnB5V3ByQjBadmV4ZmMKQ0NTQ2hkYWFlcVBQWUJDckxTSGVoVllZcE1PK2UrMVFkTFN2RmpnZUQ1UHY0NFBqTW1MeFAzVkk0MFVZOXVNNwpCcjRueVRPYnJpWXJaSVhTYjdTbWRTdFg5TUgreVVXclNrRllGSEhnREVRdXF0dFRJZ1pjdUh2MzY3Nkpyc1FuCmI1TlM0ZHJyc0U0NVZUcWYrSXR1NzRSa1VkOUsvcTNHMHN1SlVMZ3AxOUZ4ZXorYTNRenJiUTdyWTlGUEhsSG4KVno1N1dWYmt2cjMzOUxnNWd0VzB4am10Q1hJaGgzNFRqRE1OazNza0VvcFBibjJPcER5STVUMUtOL3Vsa0FmTAptcXJ4bU5VNEVVYy9NcWFoVlVrQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFBL3c0OEFkdFU3Tkx2d0k1S2N4Y3hMMitBT0IKV29nakFKNUMwTlBTN1NDR2tWK2d6dlcrZHZVYWVtYTFHUFlJdUJuajBVR2k2QUF5SStES0tiY01iL2dVVUdKQQp0YVpEcFpidU1lZ1JZOVZ2dlpMZXRZQndESzkvWk9lYnF1MGh6eWo4TzduTnJaM3RIb3h6VW1MaVVIU2Jmc0R1CnkvaE9IM0wvUE1mZ0FFaHF5SVZwWGMvQzZCYWNlOEtRSWJMQ0hYZmZjTGhEWDQ0THZYSXVIL1Y3LzN1cHcxWm8KK05NcFY5Sys4TTExNHV2bWdyOHdTNkZHYlltdXFVZy9CTlpRd2FqKzVWMEZ6azZzeHoySTdZSXI3NHVNK3BLRgpMS3lEQzJnK2NXTU5YZTV0S0YrVG5zUXE1eWtNVEJKeHl1bTh5a3VtZTE4MGcyS1o3NzVTdVF1Ni9kND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://127.0.0.1:32769 # <<-----------here
name: kind-kind
contexts:
- context:
cluster: kind-kind
user: kind-kind
name: kind-kind
current-context: kind-kind
kind: Config
preferences: {}
users:
- name: kind-kind
... ... ...
Copy the server address and use it as kubernetes_host while configuring the Vault Kubernetes auth method.
$ vault write auth/kubernetes/config \
token_reviewer_jwt="eyJhbGciOiJSUz....." \
kubernetes_host="https://127.0.0.1:32769" \
kubernetes_ca_cert=@examples/guides/vault-server/ca.crt
N.B.: If the server address does not contain a port number, there is no need to add one. Keep the address as it is.
Demo server address for GKE:
server: https://35.203.181.169
Demo server address for DigitalOcean k8s cluster:
server: https://e8dabcb3-**bb-451e****d5.k8s.ondigitalocean.com
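For completeness, the token_reviewer_jwt and CA certificate values used in the vault write command typically come from the ServiceAccount that Vault uses for token review. A sketch, assuming a ServiceAccount named vault-auth (as in the HashiCorp guide) and a cluster version that still auto-creates ServiceAccount token secrets:
$ SA_SECRET_NAME=$(kubectl get sa vault-auth -o jsonpath="{.secrets[0].name}")
$ SA_JWT_TOKEN=$(kubectl get secret $SA_SECRET_NAME -o jsonpath="{.data.token}" | base64 --decode)
$ SA_CA_CRT=$(kubectl config view --raw --minify --flatten \
    -o jsonpath='{.clusters[].cluster.certificate-authority-data}' | base64 --decode)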
This has been resolved.
I had to add the ALB DNS name to the API server certificate and then reboot all nodes.
Authentication is now working fine.
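For anyone doing the same on a kubeadm-based cluster, one way to add an extra DNS name to the API server certificate is roughly the following (a sketch, assuming kubeadm defaults; kubeadm.yaml is a file you create from the cluster's stored configuration):
$ kubectl -n kube-system get configmap kubeadm-config \
    -o jsonpath='{.data.ClusterConfiguration}' > kubeadm.yaml
# edit kubeadm.yaml and add the ALB DNS name under apiServer.certSANs
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.bak
$ sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.bak
$ sudo kubeadm init phase certs apiserver --config kubeadm.yaml
# then restart the kube-apiserver (or reboot the node, as above)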

certificate signed by unknown authority when connecting to remote kubernetes cluster using kubectl

I am using kubectl to connect to a remote Kubernetes cluster (v1.15.2). I copied the config from the remote server to my local macOS machine:
scp -r root@ip:~/.kube/config ~/.kube
and changed the URL to https://kube-ctl.example.com. I exposed the API server to the internet:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURvakNDQW9xZ0F3SUJBZ0lVU3FpUlZSU3FEOG1PemRCT1MyRzlJdGE0R2Nrd0RRWUpLb1pJaHZjTkFRRUwKQlFB92FERUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFVcHBibWN4RURBT0JnTlZCQWNUQjBKbAphVXBwYm1jeEREQUtCZ05WQkFvVEEyczRjekVTTUJBR0ExVUVDeE1KTkZCaGNtRmthV2R0TVJNd0VRWURWUVFECkV3cHJkV0psY201bGRHVnpNQ0FYR3RFNU1Ea3hNekUxTkRRd01Gb1lEekl4TVRrd09ESXdNVFUwTkRBd1dqQm8KTVFzd0NRWURWUVFHRXdKRFRqRVFNQTRHQTFVRUNCTUhRbVZwU21sdVp6RVFNQTRHQTFVRUJ4TUhRbVZwU21sdQpaekVNTUFvR0ExVUVDaE1EYXpoek1SSXdFQVlEVlFRTEV3azBVR0Z5WVdScFoyMHhFekFSQmdOVkJBTVRDbXQxClltVnlibVYwWlhNd2dnRWlNQTBHQ1NxR1NJYjNEUUVCQVFVQUE0SUJEd0F3Z2dFS0FvSUJBUUNzOGFFR2R2TUgKb0E1eTduTjVydnAvQkEyTVM1RG1TNWwzQ0p4S3VMOGJ1bkF3alF1c0lTalUxVWlqeVdGOW03VzA3elZJaVJpRwpiYzZGT3JkSEJ2QXgzazBpT2pPRlduTHp1UjdaSFhqQ3lTbDJRam9YN3gzL0l1MERQelNHTXJLSzZETGpTRTk4CkdadEpjUi9OSmpiVFFJc3FXbWFEdUIyc3dmcEc1ZmlZU1A1KzVpcld1TG1pYjVnWnJYeUJJNlJ0dVV4K1NvdW0KN3RDKzJaVG5QdFF0QnFUZHprT3p3THhwZ0Zhd1kvSU1mbHBsaUlMTElOamcwRktxM21NOFpUa0xvNXcvekVmUApHT25GNkNFWlR6bkdrTWc2aUVGenNBcDU5N2lMUDBNTkR4YUNjcTRhdTlMdnMzYkdOZmpqdDd3WkxIVklLa0lGCm44Mk92cExGaElq2kFnTUJBQUdqUWpCQU1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQUQKQVFIL01CMEdBMVVkRGdRV0JCUm0yWHpJSHNmVzFjMEFGZU9SLy9Qakc4dWdzREFOQmdrcWhraUc5dzBCQVFzRgpBQU9DQVFFQW1mOUozN3RYTys1dWRmT2RLejdmNFdMZyswbVJUeTBRSEVIblk5VUNLQi9vN2hLUVJHRXI3VjNMCktUeGloVUhvbHY1QzVUdG8zbUZJY2FWZjlvZlp0VVpvcnpxSUFwNE9Od1JpSnQ1Yk94K1d6SW5qN2JHWkhnZjkKSk8rUmNxQnQrUWsrejhTMmJKRG04WFdvMW5WdjJRNU1pUndPdnRIbnRxd3MvTlJ2bHBGV25ISHBEVExjOU9kVwpoMllzWVpEMmV4d0FRVDkxSlExVjRCdklrZGFPeW9USHZ6U2oybThSTzh6b3JBd09kS1NTdG9TZXdpOEhMeGI2ClhmaTRFbjR4TEE3a3pmSHFvcDZiSFF1L3hCa0JzYi9hd29kdDJKc2FnOWFZekxEako3S1RNYlQyRW52MlllWnIKSUhBcjEyTGVCRGRHZVd1eldpZDlNWlZJbXJvVnNRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://k8s-ctl.example.com
name: kubernetes
contexts:
- context:
cluster: kubernetes
namespace: kube-system
user: admin
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
When I try to get cluster pod info on my local Mac:
kubectl get pods --all-namespaces
it gives this error:
Unable to connect to the server: x509: certificate signed by unknown authority
When I access https://k8s-ctl.example.com in Google Chrome, the result is:
{
kind: "Status",
apiVersion: "v1",
metadata: { },
status: "Failure",
message: "Unauthorized",
reason: "Unauthorized",
code: 401
}
What should I do to successfully access the remote k8s cluster using the kubectl client?
One thing I tried was using this .kube/config generated by command, but I got the same result:
apiVersion: v1
clusters:
- cluster:
certificate-authority: ssl/ca.pem
server: https://k8s-ctl.example.com
name: default
contexts:
- context:
cluster: default
user: admin
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate: ssl/admin.pem
client-key: ssl/admin-key.pem
I've reproduced your problem, and as you created your cluster following kubernetes-the-hard-way, you need to follow these steps to be able to access your cluster from a different console.
First you have to copy the following certificates, created while you were bootstrapping your cluster, to the ~/.kube/ directory on your local machine:
ca.pem
admin.pem
admin-key.pem
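The copy itself can be done with scp, for example (a sketch; the host name and source directory here are placeholders for wherever you generated the certificates):
$ scp user@provisioning-host:~/kubernetes-the-hard-way/ca.pem \
      user@provisioning-host:~/kubernetes-the-hard-way/admin.pem \
      user@provisioning-host:~/kubernetes-the-hard-way/admin-key.pem ~/.kube/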
After copying these files to your local machine, execute the following commands:
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=~/.kube/ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=~/.kube/admin.pem \
--client-key=~/.kube/admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
Note that you have to replace the ${KUBERNETES_PUBLIC_ADDRESS} variable with the remote address of your cluster.
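Before switching to kubectl, you can sanity-check that the endpoint presents a certificate signed by that CA (a sketch reusing the same variable):
$ curl --cacert ~/.kube/ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version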
When kubectl interacts with the kube API server, it validates the kube API server certificate and also sends the certificate in client-certificate to the kube API server for mutual TLS authentication. I believe the problem is one of the following:
The CA that you used to generate the client-certificate is not the CA that was used to start up the kube API server.
The CA in certificate-authority-data is not the CA used to generate the kube API server certificate.
If you make sure that you are using the same CA to generate all the certificates consistently across the board, then it should work.
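A quick way to check both points with openssl (a sketch, assuming the PEM files mentioned in the previous answer are at hand):
# does the client certificate chain back to the CA in the kubeconfig?
$ openssl verify -CAfile ~/.kube/ca.pem ~/.kube/admin.pem
# does the API server present a certificate signed by that same CA?
$ echo | openssl s_client -connect k8s-ctl.example.com:6443 -CAfile ~/.kube/ca.pem 2>/dev/null \
    | grep 'Verify return code'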

Kubectl Error when accessing Namespaces

I was trying out the Tectonic Kubernetes sandbox setup and according to their documentation:
https://coreos.com/tectonic/docs/latest/tutorials/first-app.html
I downloaded kubectl and the corresponding kube-config files, but when I tried to get the namespaces using the following command:
kubectl get namespaces
I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What is this? Where is it picking up this localhost:8080 from?
EDIT:
Joe-MacBook-Pro:~ joe$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
Joe-MacBook-Pro:~ joe$
I'm lacking some details on your setup, but the problem is basically clear: you're not connected to the cluster.
You should have a kubeconfig file containing the cluster connection information, i.e. the context. I assume that if you run kubectl config view you'll get nothing.
I'm on windows using git bash, if I run the same command I get:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://platform-svc-integration.net
name: svc-integration
contexts:
- context:
cluster: svc-integration
user: svc-integration-admin
name: svc-integration-system
current-context: svc-integration-system
kind: Config
preferences: {}
users:
- name: svc-integration-admin
user:
client-certificate: <path>/admin/admin.crt
client-key: <path>/admin/admin.key
Basically, what I'm trying to say is that you need to configure your context. Start by running kubectl config --help to list your options; it's pretty straightforward, but if you don't manage, just refer to the documentation.
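Since the question mentions downloaded kube-config files, the quickest fix is usually to point kubectl at that file explicitly (a sketch; the download path is a placeholder):
# per command:
$ kubectl --kubeconfig ~/Downloads/kubectl-config get namespaces
# or for the whole shell session:
$ export KUBECONFIG=~/Downloads/kubectl-config
# or copy it to the default location:
$ cp ~/Downloads/kubectl-config ~/.kube/config
Without any of these, kubectl falls back to its default of localhost:8080, which explains the error above.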

Connecting to kubernetes cluster from different kubectl clients

I have installed a Kubernetes cluster using kops.
From the node where kops installed kubectl, everything works perfectly (let's say node A).
I'm trying to connect to the Kubernetes cluster from another server with kubectl installed on it (node B). I have copied ~/.kube from node A to B.
But when I try to execute a basic command like:
kubectl get pods
I'm getting:
Unable to connect to the server: x509: certificate signed by unknown authority
My config file is:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSU.........
server: https://api.kub.domain.com
name: kub.domain.com
contexts:
- context:
cluster: kub.domain.com
user: kub.domain.com
name: kub.domain.com
current-context: kub.domain.com
kind: Config
preferences: {}
users:
- name: kub.domain.com
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0F..........
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVk..........
password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
username: admin
- name: kub.domain.com-basic-auth
user:
password: r4ho3rNbYrjqZOhjnu8SJYXXXXXXXXXXX
username: admin
Appreciate any help
Let's try to troubleshoot these two.
Unable to connect to the server:
Check whether you have any firewall rules blocking access. Is your node running in a virtual machine?
x509: certificate signed by unknown authority
Can you compare the certificates on both servers and check that you are getting the same certificates?
curl -v -k $(grep 'server:' ~/.kube/config|sed 's/server://')
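One way to compare them is to fingerprint the CA embedded in each node's kubeconfig and check that the values match on node A and node B (a sketch, assuming certificate-authority-data is set as in the config above):
$ grep 'certificate-authority-data:' ~/.kube/config | awk '{print $2}' \
    | base64 --decode | openssl x509 -noout -fingerprint -sha256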