How to connect to an Alibaba Cloud cluster using kubeconfig - kubernetes

I have created a Kubernetes cluster on Alibaba Cloud and would like to control it from a client machine without SSH: run kubectl against the master/nodes, use kubernetes-dashboard, deploy manifests from local to the cloud, etc.
I know that we can use kubeconfig, but I have no idea how to set it up. Please help me, thanks.

If you created a cluster using kubeadm for example, you will need to enter the instance through SSH and download the kube-apiserver client certificates and CA from /etc/kubernetes/pki.
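For example, a minimal sketch of copying them to your workstation with scp (the exact file names under /etc/kubernetes/pki vary by setup, so CLIENT_CERTIFICATE and CLIENT_KEY below are placeholders):
scp root@IP_ADDRESS_OF_YOUR_CLUSTER:/etc/kubernetes/pki/ca.crt .
scp root@IP_ADDRESS_OF_YOUR_CLUSTER:/etc/kubernetes/pki/CLIENT_CERTIFICATE .
scp root@IP_ADDRESS_OF_YOUR_CLUSTER:/etc/kubernetes/pki/CLIENT_KEY .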
Once you have them, you can add the configuration to kubeconfig using these commands (based on https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/10-configuring-kubectl.md). Make sure you replace the CA_CERTIFICATE, IP_ADDRESS_OF_YOUR_CLUSTER, CLIENT_CERTIFICATE and CLIENT_KEY placeholders (instead of admin you can choose another name for the credentials):
kubectl config set-cluster your-cluster \
--certificate-authority=CA_CERTIFICATE \
--embed-certs=true \
--server=https://IP_ADDRESS_OF_YOUR_CLUSTER:6443
kubectl config set-credentials admin \
--client-certificate=CLIENT_CERTIFICATE \
--client-key=CLIENT_KEY
kubectl config set-context your-cluster-context \
--cluster=your-cluster \
--user=admin
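To start using the new context and check that the connection works, a small follow-up sketch (assuming the names chosen above):
kubectl config use-context your-cluster-context
kubectl get nodes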
If you get authentication errors, then you used the incorrect certificates.
In addition, make sure that you open port 6443 in your cloud firewall, otherwise you will not be able to reach the API server.

Related

Kubernetes cluster in GCloud with kops: public key denied

I am deploying a 2-node Kubernetes cluster on GCloud using kops with the following commands:
kops create cluster part1.k8s.local --zones europe-west3-a --node-count 2 --node-image ubuntu-os-cloud/ubuntu-2004-focal-v20210129 --node-size "e2-standard-2" --ssh-public-key ~/.ssh/id_rsa.pub --state ${KOPS_STATE_STORE}/ --project=${PROJECT}
kops update cluster --name part1.k8s.local --yes --admin
I then wait for the cluster to be ready and get the external IP of one of the nodes using:
kubectl get nodes -o wide
However, when I try to log in to the node I get:
ssh -i ~/.ssh/id_rsa admin@<PUBLIC_IP>
admin@<PUBLIC_IP>: Permission denied (publickey).
Checking the permissions, the nodes should be able to accept SSH connections, and I can connect to the VMs using the GCloud UI.
What am I doing wrong? Thanks!
The answer can be found here: https://github.com/kubernetes/kops/issues/10770
I've encountered this issue when I tested some scenarios with SSH keys (add, remove, overwrite, etc.).
When you log in to the GKE console, your SSH keys are stored in ~/.ssh. If the folder is empty, those keys (google_compute_engine and google_compute_engine.pub) will be created once you connect to a VM.
$ ls ~/.ssh
google_compute_engine google_compute_engine.pub google_compute_known_hosts known_hosts
Information about SSH Key is also stored in your project. You can find it in Navigation Menu > Compute Engine > Metadata. Next select SSH Keys tab to view instance SSH keys.
Additional information about SSH Keys can be found in Managing SSH keys in metadata guide.
If you encounter this kind of issue, you can remove the SSH key from the UI and delete google_compute_engine and google_compute_engine.pub locally. When you next try to SSH to the machine, GKE will ask you to create a new SSH key and the Permission denied (publickey) issue should be fixed.
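For the local cleanup described above, a minimal sketch:
rm ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.pub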
The command that should be used to SSH to a GCE VM is gcloud compute ssh:
gcloud compute ssh <instance-name> --zone <zone>
Why?
gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address.
In addition, if you encounter other SSH issues on GKE you can check the Troubleshooting SSH guide.

Validate Cluster - api/v1/nodes: http: server gave HTTP response to HTTPS client

On my Ubuntu 18.04 AWS server I am trying to create a cluster via kops.
kops create cluster \
--name=asdf.com \
--state=s3://asdf \
--zones=eu-west-1a \
--node-count=1 \
--node-size=t2.micro \
--master-size=t2.micro \
--master-count=1 \
--dns-zone=asdf.com \
--ssh-public-key=~/.ssh/id_rsa.pub
kops update cluster --name asdf.com
Successfully updated my cluster.
But when I try to validate the cluster and run
kubectl get nodes
I got the error: server gave HTTP response to HTTPS client
kops validate cluster --name asdf.com
Validation failed: unexpected error during validation: error listing nodes: Get https://api.asdf.com/api/v1/nodes: http: server gave HTTP response to HTTPS client
I couldn't solve this.
I tried
kubectl config set-cluster asdf.com --insecure-skip-tls-verify=true
but it didn't work.
Please can you help?
t2.micro instances may be too small for control plane nodes. They will certainly be very slow to boot properly. You can try omitting that flag (i.e. use the default size) and see if the cluster boots up properly.
Tip: use kops validate cluster --wait=30m as it may provide more clues to what is wrong.
Except for the instance size, the command above looks good. But if you want to dig deeper, you can have a look at https://kops.sigs.k8s.io/operations/troubleshoot/
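For illustration, a sketch of the same create command with --master-size simply omitted so kops picks its default control-plane size, followed by the update and the longer validation wait (flags otherwise copied from the question):
kops create cluster \
--name=asdf.com \
--state=s3://asdf \
--zones=eu-west-1a \
--node-count=1 \
--node-size=t2.micro \
--master-count=1 \
--dns-zone=asdf.com \
--ssh-public-key=~/.ssh/id_rsa.pub
kops update cluster --name asdf.com --yes
kops validate cluster --name asdf.com --wait=30m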

Using Cloud Shell to Access a Private Kubernetes Cluster in GCP

The following link https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters talks about the setting up of a private GKE cluster in a separate custom VPC. The Terraform code that creates the cluster and VPCs are available from https://github.com/rajtmana/gcp-terraform/blob/master/k8s-cluster/main.tf Cluster creation completed and I wanted to use some kubectl commands from the Google Cloud Shell. I used the following commands
$ gcloud container clusters get-credentials mservice-dev-cluster --region europe-west2
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "35.241.216.229/32"
Updating mservice-dev-cluster...done.
ERROR: (gcloud.container.clusters.update) Operation [<Operation
clusterConditions: []
detail: u'Patch failed'
$ gcloud container clusters update mservice-dev-cluster \
> --region europe-west2 \
> --enable-master-authorized-networks \
> --master-authorized-networks "172.17.0.2/32"
Updating mservice-dev-cluster...done.
Updated [https://container.googleapis.com/v1/projects/protean-
XXXX/zones/europe-west2/clusters/mservice-dev-cluster].
To inspect the contents of your cluster, go to:
https://console.cloud.google.com/kubernetes/workload_/gcloud/europe-
west2/mservice-dev-cluster?project=protean-XXXX
$ kubectl config current-context
gke_protean-XXXX_europe-west2_mservice-dev-cluster
$ kubectl get services
Unable to connect to the server: dial tcp 172.16.0.2:443: i/o timeout
When I give the public IP of the Cloud Shell, it says that the public IP is not allowed, with the error message given above. If I give the internal IP of the Cloud Shell starting with 172, the connection times out as well. Any thoughts? Appreciate the help.
Google suggests creating a VM within the same network as the cluster, accessing it via SSH from Cloud Shell, and running kubectl commands from there:
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
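A rough sketch of that approach (the VM name, zone, machine type, and network/subnet names below are placeholders for whatever your Terraform created):
gcloud compute instances create k8s-jump-vm \
--zone europe-west2-a \
--machine-type e2-small \
--network YOUR_CLUSTER_VPC \
--subnet YOUR_CLUSTER_SUBNET
gcloud compute ssh k8s-jump-vm --zone europe-west2-a
# then, on the VM, install kubectl and fetch the cluster credentials with gcloud container clusters get-credentials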
Try to perform the following
gcloud container clusters get-credentials [CLUSTER_NAME]
And confirm that kubectl is using the right credentials:
gcloud auth application-default login

Shell (ssh) into Azure AKS (Kubernetes) cluster worker node

I have a Kubernetes cluster in Azure using AKS and I'd like to 'login' to one of the nodes. The nodes do not have a public IP.
Is there a way to accomplish this?
The procedure is described at length in an article of the Azure documentation:
https://learn.microsoft.com/en-us/azure/aks/ssh. It consists of running a pod that you use as a relay to ssh into the nodes, and it works perfectly fine:
You probably have specified the ssh username and public key during the cluster creation. If not, you have to configure your node to accept them as the ssh credentials:
$ az vm user update \
--resource-group MC_myResourceGroup_myAKSCluster_region \
--name node-name \
--username theusername \
--ssh-key-value ~/.ssh/id_rsa.pub
To find your nodes names:
az vm list --resource-group MC_myResourceGroup_myAKSCluster_region -o table
When done, run a pod on your cluster with an ssh client inside, this is the pod you will use to ssh to your nodes:
kubectl run -it --rm my-ssh-pod --image=debian
# install the ssh client, as there is none in the Debian image
apt-get update && apt-get install openssh-client -y
On your workstation, get the name of the pod you just created:
$ kubectl get pods
Add your private key into the pod:
$ kubectl cp ~/.ssh/id_rsa pod-name:/id_rsa
Then, in the pod, connect via ssh to one of your node:
ssh -i /id_rsa theusername@10.240.0.4
(to find the nodes' IPs, run this on your workstation):
az vm list-ip-addresses --resource-group MC_myAKSCluster_myAKSCluster_region -o table
This Gist and this page have pretty good explanations of how to do it: SSHing into the nodes, not shelling into the pods/containers.
You can use this instead of SSH. It will create a tiny privileged pod and use nsenter to access the node.
https://github.com/mohatb/kubectl-wls
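For reference, the kind of privileged nsenter pod that tools like this create looks roughly like the following. This is a hand-written sketch, not the plugin's actual manifest; NODE_NAME is a placeholder and busybox is used on the assumption that its image ships nsenter:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: node-shell
spec:
  nodeName: NODE_NAME   # pin the pod to the node you want to enter
  hostPID: true         # share the node's PID namespace
  containers:
  - name: shell
    image: busybox
    # enter the namespaces of PID 1 on the node and start a shell there
    command: ["nsenter", "--target", "1", "--mount", "--uts", "--ipc", "--net", "--pid", "--", "sh"]
    securityContext:
      privileged: true
    stdin: true
    tty: true
EOF
kubectl attach -it node-shell -c shell
# when finished:
kubectl delete pod node-shell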

Access kubernetes secure API after running with docker

I've created a Kubernetes cluster on my Mac with docker-machine, following the documentation here:
http://kubernetes.io/docs/getting-started-guides/docker/
I can access the normal api from inside the instance on 127.0.0.1:8080, but I want to access it externally from my macbook. I know there is a secure port :6443, but I'm unsure how to set up the credentials to access this port.
There are lots of instructions on how to do it on custom installs of kubernetes, but I don't know how to do it inside the docker containers I'm running.
Likely, you will want to use VirtualBox's port forwarding capabilities. An example from the documentation:
VBoxManage modifyvm "MyVM" --natpf1 "k8srule,tcp,,6443,,6443"
This forwards port 6443 on all hosts interfaces to port 6443 of the guest. Port forwarding can also be configured through the VirtualBox UI.
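Once the rule is in place, a quick sanity check from the Mac (a sketch; without credentials the API server will reject the request, but getting any TLS response back means the forwarding works):
curl -k https://localhost:6443/version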
It's something of a workaround, but most of the time I think the KubeOnDocker setup is for developers who don't need the credentials mechanism:
When you start the KubeOnDocker, --config=/etc/kubernetes/manifests points to master.json. If you look at the apiserver start command, you will see that --insecure-bind-address is 127.0.0.1. If you use --config=/etc/kubernetes/manifests-multi it will point to master-multi.json, --insecure-bind-address will be 0.0.0.0 and the apiserver will be accessible from everywhere.
Note that you will need to start etcd with manifests-multi.
# Not tested start
docker run \
-d \
--net=host \
gcr.io/google_containers/etcd:2.2.1 \
/usr/local/bin/etcd \
--listen-client-urls=http://127.0.0.1:4001 \
--advertise-client-urls=http://127.0.0.1:4001 \
--data-dir=/var/etcd/data
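With the manifests-multi setup, a quick way to check the insecure port from the Mac (a sketch: replace MACHINE_NAME with your docker-machine VM name, and note you may still need a port-forwarding rule like the one above for port 8080):
curl http://$(docker-machine ip MACHINE_NAME):8080/version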