How does SSH connect to the master node on GCP? - kubernetes

I have a problem connecting to the master node on GCP. I want to create a directory on this master node, so I need to connect to it via SSH or some other method. I found many pages in my browser, but none of them worked for me. This is the situation I ran into:
(Screenshot: the terminal where I entered the command on the GCP VM.) When I tried to SSH into this master node, I got a permission-denied error. Could anyone help me? I would appreciate it very much.

I think it's due to a wrong SSH key, or the default key's permission being denied.
If the node is running in GCP itself, you can go to the instances section, where there is an option to SSH in directly instead of running the command from the browser shell.
Go to the instances page to list all running instances, find yours, and click the SSH button; it will open an SSH session for you without needing a key.
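If you prefer the command line, the gcloud CLI can also handle the key for you. A minimal sketch, assuming gcloud is installed and authenticated; the instance name and zone are hypothetical:
# gcloud generates an SSH key pair on first use and pushes the public key
# to the instance metadata, avoiding "Permission denied (publickey)" errors
gcloud compute ssh my-master-node --zone us-central1-a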

Related

Need help in solving a Remote - SSH problem

Hello guys, I have been trying to get Remote SSH working for quite some time and have been unable to make it work.
The problem:
I am trying to connect from my local Windows machine to an AWS CodeCatalyst Dev Environment (Linux, remote SSH), and it gets stuck in an endless loop of "Setting up SSH Host aws -exampleID".
I tried it from my remote Ubuntu AWS workspace and it works flawlessly (Ubuntu remote workspace to AWS CodeCatalyst Dev Environment), so it seems like an issue with my local machine.
Things I have tried:
Comparing the logs of the failed vs. the successful run, I noticed that remote.SSH.useLocalServer was true in my successful log, so I tried adding this to settings.json, but at run time it automatically changes from true to false.
I also tried a clean uninstall and reinstall a couple of times (removing AppData and the SSH config files).
I have attached an image of the log of the failed Remote SSH session.
I would appreciate it if Remote SSH users could take a look at the logs and give their insights.

How can I use Remote SSH to connect to my university cluster through a jump machine that only lets me choose a login node?

My university cluster has a jump machine that controls direct connections to the login nodes; we have to manually choose a login node when using the ssh command to connect.
We are not allowed to read or write anything on the jump machine, only run the command that chooses which node we want to log in to.
I want to know how I can use VS Code Remote SSH to connect to the login node in this case, because when I tried to follow the ProxyCommand guide, it didn't work.
(Config, error message, and login output were attached as screenshots, omitted here.)
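For reference, the standard way to hop through a jump machine is SSH's ProxyJump. A minimal sketch, assuming the jump machine allows plain SSH forwarding; the host names and user are hypothetical:
# -J is shorthand for ProxyJump: tunnel through the jump host straight to a
# chosen login node, without reading or writing anything on the jump host
ssh -J myuser@jump.university.edu myuser@login01.cluster.university.edu
An equivalent ProxyJump entry in ~/.ssh/config is what VS Code Remote SSH picks up. If the jump machine only offers an interactive node-selection menu and refuses forwarding, ProxyJump (and ProxyCommand) cannot work, and you would need to ask the cluster admins for a forwarding-capable route.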

How can I mount a remote server just like Raspbian does?

I can manually connect to a remote server by going to the File Manager and selecting "Connect to Server" under the "Go" tab.
Then I can type in the IP and username and I am ready to go. I don't need to enter a password because I have set up SSH keys.
After that I can access the remote folder at this location:
/run/user/1000/gvfs/sftp:host=192.168.178.35,user=user/
My question is, how can I achieve the same result, but without the UI, just with the CLI?
(What command is used by the raspberry OS?)
And further, how can I unmount the volume after using it with the CLI?
Thanks for the help : )
You can simply type the following command in the console to connect to the remote server. When you are finished, you can close the connection with the exit command.
ssh username@address
You can find a very detailed description here.
Looking at the comment you wrote on the other answer, I think you're looking for an SFTP server; this is basically a server where you can manipulate directories and the files in them remotely by mounting it.
The image you showed, however, shows an SSH server, as @flaxel said in his/her answer.
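To get the same GVfs mount from the CLI that the file manager creates, you can use GNOME's gio utility. A minimal sketch, assuming gio is available (it is the command-line front end to GVfs, which the file manager uses under the hood); the IP and user are the ones from the question:
# mount the remote server over SFTP, like "Connect to Server" does
gio mount sftp://user@192.168.178.35/
# the share appears under the same path the file manager uses
ls /run/user/1000/gvfs/sftp:host=192.168.178.35,user=user/
# unmount the volume when you are done
gio mount -u sftp://user@192.168.178.35/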

Kubernetes: SSH into nodes not working locally

How do I SSH to a node inside the cluster locally? I am using the Docker edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options for accessing a Kubernetes node's shell (a few are shown in the sketch after this list):
If you are using a cloud provider, you can connect to nodes directly from the instance management interface.
For example, in GCP: select Menu -> Compute Engine -> VM instances, then press the SSH button next to the desired node instance.
If you are using a local VM (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs: vagrant ssh.
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
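A quick sketch of the two commands mentioned above; the Vagrant machine name is hypothetical:
# minikube: open a shell inside the minikube VM
minikube ssh
# Vagrant: open a shell in a Vagrant-managed VM
vagrant ssh my-vm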
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes only at the level of secure communication with the kubelets on the nodes (getting the hostname and IP from the node), and as such does not provide cluster-level SSH to nodes out of the box. Depending on your actual provider/setup, there are different ways of connecting to nodes, and they all boil down to locating your SSH key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines, you can set up your own SSH key pairs and do the trick.
You can effectively shell into a pod using exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed.
Hope that helps.
You first have to extend kubectl with plugins by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make sure the kubectl-ssh file is on your path.
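Putting those steps together, a minimal sketch, assuming kubectl's standard plugin discovery (since v1.12, kubectl runs executables named kubectl-* that it finds on $PATH); the install directory is just an example:
# download the plugin script from the repo linked above
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
# make it executable and put it on $PATH
chmod +x kubectl-ssh
sudo mv kubectl-ssh /usr/local/bin/
# kubectl now exposes it as a subcommand
kubectl ssh --help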

Python fabric with EC2 instance

I am working on setting up an auto-deployment script using Python Fabric on an EC2 instance. We already have the code repositories cloned on the EC2 instance over HTTPS (without a username, https://bitbucket.org/) instead of SSH.
If we clone the repositories using SSH, it will solve my problem for now. But I just wanted to know if the following is possible:
After connecting to the remote EC2 instance using Fabric, if my next command is hg clone, it asks for a username and password, and I have to type these two things manually at the command prompt.
Is there any way to pass these values automatically at run time?
Thanks!
You can use SSH keys and clone over Mercurial's ssh:// protocol. It will authenticate without a password if you set that up.
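A minimal sketch of that setup, assuming your Bitbucket account accepts SSH keys; the repository path is hypothetical:
# on the EC2 instance: generate a key pair at the default path
# (empty passphrase so automated runs never prompt)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# add ~/.ssh/id_rsa.pub to Bitbucket under Settings -> SSH keys,
# then clone over SSH; hg authenticates with the key, so Fabric's run()
# never hits an interactive username/password prompt
hg clone ssh://hg@bitbucket.org/myteam/myrepo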