I have a Google Cloud compute instance that I connect to with:
INSTANCE_NAME='sam_vm'
gcloud beta compute ssh --zone "us-central1-a" $INSTANCE_NAME --project sam_project
When I try to connect by pressing Cmd+P and then typing in that command, I get:
ssh: Could not resolve hostname gcloud beta compute ssh --zone "us-central1-a" "shleifer-v1-vm" --project $hf_proj: nodename nor servname provided, or not known
How can I connect?
Run
gcloud compute config-ssh
then, in the VS Code Remote-SSH prompt, enter:
ssh {instance_name}.{zone}.{project_name}
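For example, with the values from the question (instance sam_vm, zone us-central1-a, project sam_project), that would look roughly like this; the exact alias is whatever gcloud compute config-ssh writes into ~/.ssh/config:
gcloud compute config-ssh --project sam_project
ssh sam_vm.us-central1-a.sam_project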
Even though it has been 2 years since this question was asked, I am providing instructions for those who are still trying to figure out how to set up SSH to GCP in VS Code on macOS.
Once you have the Google Cloud CLI installed and have been granted access to the VM instance, log in to your Google Cloud account by running the following in a terminal:
gcloud auth login
Set up the following variables, which will be used in the gcloud commands:
gcloud config set project <your project name>
gcloud config set compute/zone <your GCP zone>
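For example, using the project and zone from the question above (substitute your own values):
gcloud config set project sam_project
gcloud config set compute/zone us-central1-a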
Then you can establish an SSH tunnel between the VM and your local machine:
gcloud compute ssh [YOUR-VM-NAME] --tunnel-through-iap
Set up VS Code and install the Remote Development extension.
Run the following command in a terminal to obtain the internal SSH command used by gcloud compute ssh to connect to the VM:
gcloud compute ssh [YOUR-VM-NAME] --tunnel-through-iap --dry-run
The output may look similar to the following:
/usr/bin/ssh -t -i /Users/<User_ID>/.ssh/google_compute_engine -o CheckHostIP=no
-o HashKnownHosts=no -o HostKeyAlias=compute.8972912327831725343 -o IdentitiesOnly=yes
-o StrictHostKeyChecking=yes
-o UserKnownHostsFile=/Users/<User_ID>/.ssh/google_compute_known_hosts
-o "ProxyCommand /Library/Frameworks/Python.framework/Versions/3.10/bin/python3
/Library/google-cloud-sdk/lib/gcloud.py compute start-iap-tunnel <instance name>
%p --listen-on-stdin --project=<your project name> --zone=<your zone> --verbosity=warning"
-o ProxyUseFdpass=no <Gcloud_Username>@compute.8972912327831725343
Copy this SSH command and remove '/usr/bin/':
ssh -t -i /Users/<User_ID>/.ssh/google_compute_engine -o CheckHostIP=no
-o HashKnownHosts=no -o HostKeyAlias=compute.8972912327831725343 -o IdentitiesOnly=yes
-o StrictHostKeyChecking=yes
-o UserKnownHostsFile=/Users/<User_ID>/.ssh/google_compute_known_hosts
-o "ProxyCommand /Library/Frameworks/Python.framework/Versions/3.10/bin/python3
/Library/google-cloud-sdk/lib/gcloud.py compute start-iap-tunnel <instance name>
%p --listen-on-stdin --project=<your project name> --zone=<your zone> --verbosity=warning"
-o ProxyUseFdpass=no <Gcloud_Username>@compute.8972912327831725343
In VS Code:
Press 'Cmd' + 'Shift' + 'P'
Select 'Remote-SSH: Add New SSH Host...'
Paste the SSH command copied above and press 'Enter'
Select the config file /Users/<User_ID>/.ssh/config
Press 'Cmd' + 'Shift' + 'P'
Select 'Remote-SSH: Connect to Host...'
Select the Compute instance, and choose Linux as the platform when prompted.
VS Code should establish an SSH tunnel to the VM, and you should be able to open the root folder.
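For reference, the Host entry that ends up in the selected config file is assembled from the pasted command and should look roughly like this (illustrative only; the exact keywords Remote-SSH writes may differ slightly):
Host compute.8972912327831725343
  HostName compute.8972912327831725343
  User <Gcloud_Username>
  IdentityFile /Users/<User_ID>/.ssh/google_compute_engine
  CheckHostIP no
  HashKnownHosts no
  HostKeyAlias compute.8972912327831725343
  IdentitiesOnly yes
  StrictHostKeyChecking yes
  UserKnownHostsFile /Users/<User_ID>/.ssh/google_compute_known_hosts
  ProxyCommand /Library/Frameworks/Python.framework/Versions/3.10/bin/python3 /Library/google-cloud-sdk/lib/gcloud.py compute start-iap-tunnel <instance name> %p --listen-on-stdin --project=<your project name> --zone=<your zone> --verbosity=warning
  ProxyUseFdpass no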
Hope this will help someone save hours of effort.
First execute gcloud compute config-ssh
Open the ~/.ssh/config file
Find
Host {instance_name}.{zone}.{project_name}
HostName {ip_name_google_provided}
IdentityFile /Users/youruser/.ssh/google_compute_engine
UserKnownHostsFile=/Users/youruser/.ssh/google_compute_known_hosts
HostKeyAlias=compute.123423
IdentitiesOnly=yes
CheckHostIP=no
and convert to
Host {ip_name_google_provided}
HostName {ip_name_google_provided}
IdentityFile /Users/youruser/.ssh/google_compute_engine
UserKnownHostsFile=/Users/youruser/.ssh/google_compute_known_hosts
HostKeyAlias=compute.123423
IdentitiesOnly=yes
CheckHostIP=no
In short, use the IP from HostName ({ip_name_google_provided}) as the Host value as well, and connect to {ip_name_google_provided} instead.
Note: the problem is that the computer cannot resolve {instance_name}.{zone}.{project_name} as a DNS name when it is taken from the ssh config file.
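As a quick check, ssh's -G flag prints the configuration that would actually be used for a given alias, including the hostname it will connect to:
ssh -G {instance_name}.{zone}.{project_name} | grep -i hostname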
I'm trying to move to Windows PowerShell instead of cmd.
One of the commands I run often is for connecting to GCP compute engines using ssh and binding the machine's ports to my local machine.
I use the following template (taken from GCP's docs):
gcloud compute ssh VM_NAME --project PROJECT_ID --zone ZONE -- -L LOCAL_PORT:localhost:REMOTE_PORT -- -L LOCAL_PORT:localhost:REMOTE_PORT
This works great in cmd, but when I try to run it in PowerShell I get the following error:
(gcloud.compute.ssh) unrecognized arguments:
-L
8010:localhost:8888
What am I missing?
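For what it's worth, one way to sidestep PowerShell's handling of the bare -- separator (the usual suspect here, though not confirmed in this thread) is gcloud's own --ssh-flag option, which takes the whole ssh flag as a single quoted argument:
gcloud compute ssh VM_NAME --project PROJECT_ID --zone ZONE --ssh-flag="-L 8010:localhost:8888"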
I am deploying a 2-node Kubernetes cluster on GCloud using kops with the following commands:
kops create cluster part1.k8s.local --zones europe-west3-a --node-count 2 --node-image ubuntu-os-cloud/ubuntu-2004-focal-v20210129 --node-size "e2-standard-2" --ssh-public-key ~/.ssh/id_rsa.pub --state ${KOPS_STATE_STORE}/ --project=${PROJECT}
kops update cluster --name part1.k8s.local --yes --admin
I then wait for the cluster to be ready and get the external IP of one of the nodes using:
kubectl get nodes -o wide
However when I try to login to the node I get:
ssh -i ~/.ssh/id_rsa admin@<PUBLIC_IP>
admin@<PUBLIC_IP>: Permission denied (publickey).
Checking the permissions, the nodes should be able to accept SSH connections, and I can connect to the VMs using the GCloud UI.
What am I doing wrong? Thanks!
The answer can be found here: https://github.com/kubernetes/kops/issues/10770
I encountered this issue when I was testing some scenarios with SSH keys (add, remove, overwrite, etc.).
When you log in to the GKE console, your SSH keys are stored in ~/.ssh. If the folder is empty, those keys (google_compute_engine and google_compute_engine.pub) will be created once you connect to the VM.
$ ls ~/.ssh
google_compute_engine google_compute_engine.pub google_compute_known_hosts known_hosts
Information about the SSH key is also stored in your project. You can find it under Navigation Menu > Compute Engine > Metadata; then select the SSH Keys tab to view instance SSH keys.
Additional information about SSH keys can be found in the Managing SSH keys in metadata guide.
If you encounter this kind of issue, you can remove the SSH key from the UI and remove google_compute_engine and google_compute_engine.pub locally. The next time you SSH to the machine, GKE will ask you to create a new SSH key, and the Permission denied (publickey) issue should be fixed.
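The local part of that cleanup is just deleting the generated key pair; a minimal sketch, assuming the default paths mentioned above:
# remove the stale local key pair; a new one is generated the next time you connect
rm ~/.ssh/google_compute_engine ~/.ssh/google_compute_engine.pub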
The command which should be used to SSH to a GCE VM is gcloud compute ssh:
gcloud compute ssh <instance-name> --zone <zone>
Why?
gcloud compute ssh is a thin wrapper around the ssh(1) command that takes care of authentication and the translation of the instance name into an IP address.
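(You can see the exact ssh invocation it constructs by adding the --dry-run flag, e.g. gcloud compute ssh <instance-name> --zone <zone> --dry-run.)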
In addition, if you encounter other SSH issues on GKE, you can check the Troubleshooting SSH guide.
I am trying to set up Datalab from my Chromebook using the following tutorial: https://cloud.google.com/dataproc/docs/tutorials/dataproc-datalab. However, when trying to set up an SSH tunnel using the following guidelines, https://cloud.google.com/dataproc/docs/concepts/accessing/cluster-web-interfaces#create_an_ssh_tunnel, I keep receiving the following error:
ERROR: (gcloud.compute.ssh) Could not fetch resource:
- Project 57800607318 is not found and cannot be used for API calls. If it is recently created, enable Compute Engine API by visiting https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=57800607318 then retry. If you enabled this API recently, wait a few minutes for the action to propagate to our systems and retry.
The error message would lead me to believe my "Compute Engine API" is not enabled. However, I have double checked and "Compute Engine API" is enabled.
Here is what I am entering into Cloud Shell:
gcloud compute ssh ${test-cluster-m} \
--project=${datalab-test-229519} --zone=${us-west1-b} -- \
-4 -N -L ${8080}:${test-cluster-m}:${8080}
The ${} syntax is for accessing local environment variables. You set them in the previous step with:
export PROJECT=project;export HOSTNAME=hostname;export ZONE=zone;PORT=number
In this case would be:
export PROJECT=datalab-test-229519;export HOSTNAME=test-cluster-m;export ZONE=us-west1-b;PORT=8080
Either try this:
gcloud compute ssh test-cluster-m \
--project datalab-test-229519 --zone us-west1-b -- \
-D 8080 -N
Or access the environment variables with:
gcloud compute ssh ${HOSTNAME} \
--project=${PROJECT} --zone=${ZONE} -- \
-D ${PORT} -N
Also check that the VM you are trying to access is running.
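For example, this prints the instance status (it should be RUNNING), using the names from the question:
gcloud compute instances describe test-cluster-m --zone us-west1-b --format='value(status)'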
I have a Kubernetes cluster in Azure using AKS and I'd like to 'login' to one of the nodes. The nodes do not have a public IP.
Is there a way to accomplish this?
The procedure is described at length in an article of the Azure documentation:
https://learn.microsoft.com/en-us/azure/aks/ssh. It consists of running a pod that you use as a relay to SSH into the nodes, and it works perfectly fine:
You have probably specified the SSH username and public key during cluster creation. If not, you have to configure your node to accept them as the SSH credentials:
$ az vm user update \
--resource-group MC_myResourceGroup_myAKSCluster_region \
--name node-name \
--username theusername \
--ssh-key-value ~/.ssh/id_rsa.pub
To find your nodes names:
az vm list --resource-group MC_myResourceGroup_myAKSCluster_region -o table
When done, run a pod on your cluster with an SSH client inside; this is the pod you will use to SSH to your nodes:
kubectl run -it --rm my-ssh-pod --image=debian
# install the ssh client, as there is none in the Debian image
apt-get update && apt-get install openssh-client -y
On your workstation, get the name of the pod you just created:
$ kubectl get pods
Add your private key into the pod:
$ kubectl cp ~/.ssh/id_rsa pod-name:/id_rsa
Then, in the pod, connect via SSH to one of your nodes:
ssh -i /id_rsa theusername@10.240.0.4
(To find the nodes' IPs, on your workstation):
az vm list-ip-addresses --resource-group MC_myResourceGroup_myAKSCluster_region -o table
This Gist and this page have pretty good explanations of how to do it: SSHing into the nodes, not shelling into the pods/containers.
You can use this instead of SSH. It will create a tiny privileged pod and use nsenter to access the node.
https://github.com/mohatb/kubectl-wls
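For context, the idea behind such scripts is roughly the following (a sketch only, not the exact kubectl-wls implementation; <node-name> is a placeholder and the alpine image is assumed to ship a busybox nsenter): schedule a privileged pod with hostPID on the target node, then nsenter into the host's PID 1 to get a shell in the host's namespaces.
kubectl run node-shell --rm -it --restart=Never --image=alpine \
  --overrides='{"spec":{"hostPID":true,"nodeName":"<node-name>","containers":[{"name":"node-shell","image":"alpine","stdin":true,"tty":true,"securityContext":{"privileged":true},"command":["nsenter","--target","1","--mount","--uts","--ipc","--net","--pid"]}]}}'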
I have followed the docker-compose Concourse installation setup.
Everything is up and running, but I can't figure out what to use as the --tsa-host value in the command that connects the worker to the TSA host.
It is worth mentioning that the dockerized Concourse web and db are running on the same machine that I hope to use as a bare-metal worker.
I have tried (1) using the IP address of the Concourse web container, but no joy. I cannot even ping the Docker container IP from the host.
1.
sudo ./concourse worker --work-dir ./worker --tsa-host IP_OF_DOCKER_CONTAINER --tsa-public-key host_key.pub
--tsa-worker-private-key worker_key
I have also tried using (2) the CONCOURSE_EXTERNAL_URL and (3) the IP address of the host, but no luck either.
2.
sudo ./concourse worker --work-dir ./worker --tsa-host http://10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub
--tsa-worker-private-key worker_key
3.
sudo ./concourse worker --work-dir ./worker --tsa-host 10.XXX.XXX.XX:8080 --tsa-public-key host_key.pub
--tsa-worker-private-key worker_key
Other details of setup:
Mac OSX Sierra
Docker For Mac
Please confirm you use the internal IP of the host, not the public IP and not the container IP.
--tsa-host <INTERNAL_IP_OF_HOST>
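For example (the IP below is a made-up placeholder for the host's internal address; the worker also has to reach the TSA port, which defaults to 2222 rather than the 8080 web port tried above, and depending on the Concourse version the port is given either as HOST:PORT or via a separate --tsa-port flag):
sudo ./concourse worker --work-dir ./worker --tsa-host 192.168.1.20:2222 \
  --tsa-public-key host_key.pub --tsa-worker-private-key worker_key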
If you use the docker-compose.yml as in its setup document, you needn't care about the TSA host; the environment variable has been defined already:
CONCOURSE_TSA_HOST: concourse-web
I used the docker-compose.yml recently with the steps described here: https://concourse-ci.org/docker-repository.html.
Please confirm that there is a keys directory next to the docker-compose.yml after you have executed these steps:
mkdir -p keys/web keys/worker
ssh-keygen -t rsa -f ./keys/web/tsa_host_key -N ''
ssh-keygen -t rsa -f ./keys/web/session_signing_key -N ''
ssh-keygen -t rsa -f ./keys/worker/worker_key -N ''
cp ./keys/worker/worker_key.pub ./keys/web/authorized_worker_keys
cp ./keys/web/tsa_host_key.pub ./keys/worker
export CONCOURSE_EXTERNAL_URL=http://192.168.99.100:8080
docker-compose up
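Given the key-generation commands above, a quick sanity check of the directory contents before running docker-compose up would look like:
ls keys/web
# authorized_worker_keys  session_signing_key  session_signing_key.pub  tsa_host_key  tsa_host_key.pub
ls keys/worker
# tsa_host_key.pub  worker_key  worker_key.pub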