Kubernetes cluster in GCloud with kops: public key denied - kubernetes

I am deploying a 2-node Kubernetes cluster on GCloud using kops with the following commands:
kops create cluster part1.k8s.local --zones europe-west3-a --node-count 2 --node-image ubuntu-os-cloud/ubuntu-2004-focal-v20210129 --node-size "e2-standard-2" --ssh-public-key ~/.ssh/id_rsa.pub --state ${KOPS_STATE_STORE}/ --project=${PROJECT}
kops update cluster --name part1.k8s.local --yes --admin
I then wait for the cluster to be ready and get the external IP of one of the nodes using:
kubectl get nodes -o wide
However, when I try to log in to the node I get:
ssh -i ~/.ssh/id_rsa admin@<PUBLIC_IP>
admin@<PUBLIC_IP>: Permission denied (publickey).
Checking the permissions, the nodes should be able to accept SSH connections, and I can connect to the VMs using the GCloud UI.
What am I doing wrong? Thanks!

The answer can be found here: https://github.com/kubernetes/kops/issues/10770

I've encountered this issue when I tested some scenarios with SSH keys (add, remove, overwrite, etc.).
When you log in to the GCP console, your SSH keys are stored in ~/.ssh. If that folder is empty, the keys (google_compute_engine and google_compute_engine.pub) will be created once you connect to a VM.
$ ls ~/.ssh
google_compute_engine google_compute_engine.pub google_compute_known_hosts known_hosts
Information about SSH keys is also stored in your project. You can find it in Navigation Menu > Compute Engine > Metadata, then select the SSH Keys tab to view instance SSH keys.
Additional information about SSH keys can be found in the Managing SSH keys in metadata guide.
If you encounter this kind of issue, you can remove the SSH key from the UI and delete google_compute_engine and google_compute_engine.pub locally. The next time you SSH to the machine, gcloud will ask you to create a new SSH key, and the Permission denied (publickey) issue should be fixed.
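For reference, a minimal sketch of the local cleanup (these are the default key names gcloud uses; moving them aside instead of deleting them keeps a backup):
mv ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.old
mv ~/.ssh/google_compute_engine.pub ~/.ssh/google_compute_engine.pub.old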
The command that should be used to SSH to a GCE VM is gcloud compute ssh:
gcloud compute ssh <instance-name> --zone <zone>
Why?
gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address.
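For example, for a node created by kops on GCE (the instance name below is illustrative; use the name shown by gcloud compute instances list):
gcloud compute instances list
gcloud compute ssh nodes-europe-west3-a-abcd --zone europe-west3-a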
In addition, if you encounter other SSH issues on GKE you can check the Troubleshooting SSH guide.

Related

Get public key for new cert from k3s

So I got locked out of my Kubernetes instance, presumably due to a cert expiration. It was created with k3sup, by someone with a lot more Kubernetes experience than me.
To dig into the issue, I used AWS session manager to connect to the instance. When I ran sudo kubectl get pods -A from within the instance, I got the same error as I got from my local machine:
error: You must be logged in to the server (Unauthorized)
I then ran sudo systemctl restart k3s to restart k3s, which supposedly rotates the certs. Now kubectl commands work from within the instance, which is great, but still not from my local machine.
If this did rotate the cert as I assume, I think I need the new public key for my local ~/.kube/config. Where do I get this?
I found the updated certs in the kube config at /etc/rancher/k3s/k3s.yaml
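A minimal sketch of pulling that file down to the local machine, assuming SSH access to the node and the default API port; the username and <node-ip> are placeholders, and reading the file may require sudo on the node:
# copy the regenerated kubeconfig from the k3s node
scp ubuntu@<node-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config
# k3s writes the server address as 127.0.0.1, so point it at the node instead
sed -i 's/127.0.0.1/<node-ip>/' ~/.kube/config
kubectl get pods -A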

Run 'kubectl' commands from my localhost to GKE - but via tunnelling through a bastion host

Currently...
I have a GKE/kubernetes/k8s cluster in GCP. I have a bastion host (Compute Engine VM instance) in GCP. I have allowlisted my bastion host's IP in the GKE cluster's Master authorized networks section. Hence, in order to run kubectl commands against my GKE cluster, I first need to SSH into my bastion host by running the gcloud beta compute ssh command; then I run the gcloud container clusters get-credentials command to authenticate with GKE, and from there I can run kubectl commands as usual.
Later...
I want to be able to run kubectl commands against my GKE cluster directly from my local development CLI. In order to do that, I can add my local development machine's IP as an allowlisted entry in my GKE's Master authorized networks, and that should be it. Then I can run gcloud container clusters get-credentials first and then run kubectl commands as usual.
However...
I am looking for a way to avoid having to allowlist my local development machine's IP. Every time I take my laptop somewhere new, I have to add my new IP to the allowlist before I can run the gcloud container clusters get-credentials command and then kubectl commands.
I wonder...
Is there a way to assign a port number on the bastion host that can be used to invoke kubectl commands against the remote GKE cluster securely? Then I could just use the gcloud compute start-iap-tunnel command (which, BTW, takes care of all permission issues using Cloud IAM) from my local dev CLI to establish an SSH tunnel to that specific port number on the bastion host. That way, from the GKE cluster's perspective, the kubectl commands come from the bastion host (which is already allowlisted in its Master authorized networks), while behind the scenes I am authenticating with the bastion host from my local dev CLI (using my gcloud auth credentials) and invoking kubectl commands from there securely.
Is this possible? Any ideas from anyone?
This will help you access your secured GKE cluster from localhost:
https://github.com/GoogleCloudPlatform/gke-private-cluster-demo
Once the bastion host is set up with tinyproxy as in the above doc, we can use the shell functions below to quickly enable/disable the bastion host access:
enable_secure_kubectl() {
  # Alias kubectl/helm commands to use the local proxy
  alias kubectl="HTTPS_PROXY=localhost:8888 kubectl"
  alias helm="HTTPS_PROXY=localhost:8888 helm"
  # Open an SSH tunnel to the bastion for 1 hour
  gcloud compute ssh my-bastion-host -- -o ExitOnForwardFailure=yes -M -S /tmp/sslsock -L8888:127.0.0.1:8888 -f sleep 3600
  # Get Kubernetes credentials with the kube-apiserver's internal IP in the kubeconfig
  gcloud container clusters get-credentials my-gke-cluster --region us-east1 --project myproject --internal-ip
}

disable_secure_kubectl() {
  unalias kubectl
  unalias helm
  ssh -S /tmp/sslsock -O exit my-bastion-host
}
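Typical usage, assuming the two functions above are sourced in your shell profile:
enable_secure_kubectl
kubectl get nodes    # requests go through the SSH tunnel to the bastion on localhost:8888
disable_secure_kubectl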

ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1]

I'm just trying to move a simple text file from the local host to the remote host. I'm using Google's Cloud computing and more specifically, I'm using the gcloud command line tool. Here are the instructions and errors I received:
Admins-MacBook-Pro-4:downloads kylefoley$ gcloud compute scp lst_calc.txt instance-1:/home/kylefoley76/hey.txt
No zone specified. Using zone [us-central1-a] for instance: [instance-1].
Updating project ssh metadata...⠧Updated [https://www.googleapis.com/compute/v1/projects/atomic-drake-250022].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Warning: Permanently added 'compute.1494876250178113937' (ECDSA) to the list of known hosts.
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
scp: /home/kylefoley76/hey.txt: Permission denied
ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I then tried putting root@ in front of the remote path and got the following error:
Admins-MacBook-Pro-4:downloads kylefoley$ gcloud compute scp lst_calc.txt root@instance-1:/home/kylefoley76/hey.txt
No zone specified. Using zone [us-central1-a] for instance: [instance-1].
Updating project ssh metadata...⠛Updated [https://www.googleapis.com/compute/v1/projects/atomic-drake-250022].
Updating project ssh metadata...done.
Waiting for SSH key to propagate.
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
root@35.193.247.37: Permission denied (publickey).
Enter passphrase for key '/Users/kylefoley/.ssh/google_compute_engine':
It was then clear that the program was caught in an infinite loop of some kind.
UPDATE
Also, I want to make it clear that my problem is not a Linux problem but a gcloud problem. A lot of people who have this problem recommend putting the files in the /tmp folder. On the remote Linux computer that I'm connected to, it seems that I have all of the necessary permissions: I've created folders and files on this remote machine and I've moved the files around with the terminal, so I think that rules out the possibility that my problem lies with the permissions of the Linux computer itself.
Create a tmp dir under your home directory on your instance with chmod 777 and send files to that, as sketched below.
gcloud compute scp ./app.tar.gz my-vm:~/tmp
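A minimal sketch of the full sequence (my-vm and the file name are illustrative):
# create the world-writable drop directory on the VM first
gcloud compute ssh my-vm --command 'mkdir -p ~/tmp && chmod 777 ~/tmp'
# then copy from the workstation
gcloud compute scp ./app.tar.gz my-vm:~/tmp/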
Reason for the message:
This message means that the network connection from the client to the server is working and that SSH is running. However, key-based authentication failed.
Troubleshooting steps:
Make sure that you have authenticated to gcloud as an IAM user with the compute instance admin role.
Run gcloud auth login [IAM-USER], then try gcloud compute ssh again.
Verify that persistent SSH Keys metadata for gcloud is set for either the project or instance.
gcloud compute project-info describe --format flattened | grep commonInstanceMetadata.items | grep ssh | grep -v expireOn
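For instance-level keys, a similar check can be done (a sketch; the instance name and zone are placeholders):
gcloud compute instances describe <instance-name> --zone <zone> --format flattened | grep metadata.items | grep ssh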
It's possible that you lost the private key, mismatched a keypair, etc. You can force gcloud to generate a new SSH keypair by doing the following:
If present, move ~/.ssh/google_compute_engine and ~/.ssh/google_compute_engine.pub out of the way. For example:
mv ~/.ssh/google_compute_engine.pub ~/.ssh/google_compute_engine.pub.old
mv ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.old
Try gcloud compute ssh [INSTANCE-NAME] again. A new keypair will be created and the public key will be added to the SSH keys metadata.
Verify that the Linux Guest Environment scripts are installed and running. If the Linux Guest Environment is not installed, re-install it.
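A rough way to check this on the VM (the service name below is the one used by current GCE images; older images ship a different set of daemons, so treat it as an assumption):
sudo systemctl status google-guest-agent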

Shell (ssh) into Azure AKS (Kubernetes) cluster worker node

I have a Kubernetes cluster in Azure using AKS and I'd like to 'login' to one of the nodes. The nodes do not have a public IP.
Is there a way to accomplish this?
The procedure is described at length in the Azure documentation:
https://learn.microsoft.com/en-us/azure/aks/ssh. It consists of running a pod that you use as a relay to SSH into the nodes, and it works perfectly fine:
You have probably specified the SSH username and public key during cluster creation. If not, you have to configure your node to accept them as the SSH credentials:
$ az vm user update \
--resource-group MC_myResourceGroup_myAKSCluster_region \
--name node-name \
--username theusername \
--ssh-key-value ~/.ssh/id_rsa.pub
To find your node names:
az vm list --resource-group MC_myResourceGroup_myAKSCluster_region -o table
When done, run a pod on your cluster with an SSH client inside; this is the pod you will use to SSH to your nodes:
kubectl run -it --rm my-ssh-pod --image=debian
# install the ssh client, as there is none in the Debian image
apt-get update && apt-get install openssh-client -y
On your workstation, get the name of the pod you just created:
$ kubectl get pods
Add your private key into the pod:
$ kubectl cp ~/.ssh/id_rsa pod-name:/id_rsa
Then, in the pod, connect via SSH to one of your nodes:
ssh -i /id_rsa theusername@10.240.0.4
(to find the node IPs, on your workstation):
az vm list-ip-addresses --resource-group MC_myResourceGroup_myAKSCluster_region -o table
This Gist and this page have pretty good explanations of how to do it: SSHing into the nodes rather than shelling into the pods/containers.
You can use this instead of SSH. It will create a tiny privileged pod and use nsenter to access the node.
https://github.com/mohatb/kubectl-wls
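For reference, the underlying technique is roughly the following (a sketch, not the plugin's exact manifest; the node name is illustrative and the image is assumed to provide nsenter, as busybox-based Alpine does). It schedules a privileged pod with hostPID on the target node and uses nsenter to join the host's namespaces:
kubectl run node-shell -it --rm --restart=Never --image=alpine \
  --overrides='{"spec":{"nodeName":"aks-nodepool1-12345678-0","hostPID":true,"containers":[{"name":"node-shell","image":"alpine","stdin":true,"tty":true,"securityContext":{"privileged":true},"command":["nsenter","--target","1","--mount","--uts","--ipc","--net","--pid","--","sh"]}]}}'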

Kubernetes ssh into nodes not working in local

How do I SSH to a node inside the cluster locally? I am using the Docker Edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options to access Kubernetes node shell.
In case you are using cloud provider, you are able to connect to nodes directly from instances management interface.
For example, in GCP: Select Menu -> Compute Engine -> VM instances, then press SSH button on the left side of the desired node instance.
If you are using a local VM (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs - vagrant ssh
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
How to ssh to the node inside the cluster in local
Kubernetes is only aware of nodes at the level of secure communication with the kubelets on the nodes (getting the hostname and IP from each node), and as such does not provide cluster-level SSH to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, and they all boil down to locating your SSH key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines you can set up your own SSH keypairs and do the trick.
You can effectively shell into a pod using exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed.
Hope that helps.
You first have to extend kubectl with plugins by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is in your path.
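A minimal sketch of those last steps (/usr/local/bin is just an example of a directory on your PATH):
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/
kubectl plugin list   # kubectl-ssh should now show up as an installed plugin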