Logging into a specific HPC node from a Jupyter notebook in VS Code

I work on an HPC system where we first log in to a login node and then request a specific amount of computing resources, which are then allocated on a compute node. We cannot run our programs on the login node since it is shared. Currently, if we want to run Jupyter on the compute node, we have to ssh into the compute node and forward the port.
Is there any way to ssh into the compute node so that we can run the Jupyter notebook from VS Code itself? If I run it directly, it runs on the login node, which is a problem.

You can ssh into a compute node that is accessible through a login node by setting up your VSCode ssh config file such that your login node is a ProxyJump and your compute node the host you want to ssh to.
If you would log in to your login node as ssh username@ip.of.login.node, and from the login node you can ssh to the compute node as ssh ip.of.compute.node, then you can set up your config file like this:
Host loginnode
    HostName ip.of.login.node
    User meulemeester

Host computenode
    HostName ip.of.compute.node
    User meulemeester
    ProxyJump loginnode
    # Alternatively, use ProxyCommand instead of ProxyJump (one or the other, not both):
    # ProxyCommand ssh -W %h:%p loginnode
    # The -W flag redirects stdin and stdout to the given host and port;
    # %h:%p expands to the target host and port (here computenode, port 22 by default).
Make sure that this config file is the one used when running ssh. Check the VS Code setting Remote.SSH: Config File to see if it points to this config file. Instead of using the IPs of the login node or compute node, you can also use the hostnames directly (i.e. anything you would put after the @ when ssh'ing).
Depending on the authentication methods in use, you may need to add additional parameters to the config file. The given setup works if each host has the public key of the local machine stored under ~/.ssh/authorized_keys.
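If key-based authentication is not set up yet, a minimal sketch (assuming OpenSSH on the local machine and that the config above is in ~/.ssh/config):
ssh-keygen -t ed25519     # generate a key pair if you don't have one yet
ssh-copy-id loginnode     # install the public key on the login node
ssh-copy-id computenode   # this hop goes through the jump host defined above
As a quick sanity check from a terminal, ssh computenode should now drop you into a shell on the compute node via the login node.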
The compute node should now be available as an option when you want to connect to a host in VSCode.

Related

How can I use Remote SSH to connect to my university cluster through a jump machine that is only used to choose a login node?

My university cluster has a jump machine that controls direct connections to the login nodes; we need to manually choose a login node when using the ssh command to connect.
We are not allowed to read or write anything on the jump machine; the only command available is the one that chooses which node we want to log in to.
I want to know how I can use VS Code Remote SSH to connect to the login node in this case, because when I tried to follow the ProxyCommand guide, it didn't work.

Unable to access application through minikube tunnel

I'm currently using minikube and I'm trying to access my application by utilizing the minikube tunnel since the service type is LoadBalancer.
I'm able to obtain an external IP when I execute minikube tunnel; however, when I try to check it in the browser it doesn't work. I've also tried Postman and curl; neither works.
To add to this, if I shell into the pod I can use curl and it does work. Furthermore, I executed kubectl port-forward and I was able to access my application through localhost.
Does anyone have any idea as to why I'm not being able to access my application even though everything seems to be running correctly?
Your service is probably bound to localhost. Minikube starts the cluster in a VM or in Docker (depending on the driver you are using), and that VM/container is bound to an external IP, $(minikube ip).
When you run minikube tunnel, you are tunneling from the minikube cluster's external IP to the internal IP of the load balancer. For a LoadBalancer service in Kubernetes, the external IP goes from "Pending" to an actual internal IP, and something like this should work:
curl -H 'Host: localhost' -v $(minikube ip)
However, it doesn't work in the browser, since in the above command you are sending the request to minikube's IP, not to localhost. What I do to make this work is an SSH tunnel like this one:
ssh -i $(minikube ssh-key) docker@$(minikube ip) -L 8008:localhost:80
This maps the load balancer listener on port 80 inside the minikube cluster to port 8008 on localhost. The external IP of the service remains pending, but it works since the kube controller can still find it. If you want to map local port 80 instead, you will need sudo, since it is a privileged port.
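With the tunnel up, the service should answer locally (8008 being the example port used above):
curl -v http://localhost:8008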
If the version of ssh on your system (the one in your path) is less than 8.0, 'minikube tunnel' will silently fail to instantiate the ssh tunnel for some port forwards (e.g. privileged ports).
Open a command prompt as administrator, and type 'where.exe ssh'. Navigate to that location in windows explorer, and right-click on 'ssh.exe'. Choose Properties->Details to see the version.
If this is less than version 8.0 you must upgrade that to at least version 8.0 to prevent this silent failure of ssh by 'minikube tunnel'.
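A quicker way to check from the same command prompt (assuming ssh is on your PATH; the exact version string varies):
ssh -V   # prints something like OpenSSH_for_Windows_8.1p1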
After upgrading ssh, ensure that the newer version is the one that will be executed by using the 'where.exe' command again. If there are two on your system, reorder the paths in your PATH environment variable. Restart your shell (or better, reboot the system) so that all processes' environments pick up the path changes.
Then try 'minikube tunnel' again. When it is working, you should see an ssh instance in the task manager for each tunnel that minikube creates.
In my case minikube service <serviceName> solved this issue.
For further details, look in the minikube docs.
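A variant of the same command that just prints the service URL instead of opening a browser (standard minikube flag):
minikube service <serviceName> --url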

Kubernetes ssh into nodes not working in local

How do I ssh to a node inside the cluster locally? I am using the Docker Edge version, which has Kubernetes built in. If I run
kubectl ssh node
I am getting
Error: unknown command "ssh" for "kubectl"
Did you mean this?
set
Run 'kubectl --help' for usage.
error: unknown command "ssh" for "kubectl"
Did you mean this?
set
There is no "ssh" command in kubectl yet, but there are plenty of options to access a Kubernetes node's shell.
If you are using a cloud provider, you can connect to nodes directly from the instance management interface.
For example, in GCP: select Menu -> Compute Engine -> VM instances, then press the SSH button on the left side of the desired node instance (a command-line equivalent is sketched after this list).
If you are using a local VM (VMware, VirtualBox), you can configure sshd before rolling out the Kubernetes cluster, or use the VM console, which is available from the management GUI.
Vagrant provides its own command to access VMs - vagrant ssh
If you are using minikube, there is the minikube ssh command to connect to the minikube VM. There are also other options.
I found no simple way to access the docker-for-desktop VM, but you can easily switch to minikube for experimenting with node settings.
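For the GCP case above, the same thing from the command line, assuming the gcloud SDK is installed and configured:
gcloud compute ssh <node-instance-name> --zone <zone>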
How to ssh to the node inside the cluster in local
Kubernetes is aware of nodes only at the level of secure communication with the kubelets on them (getting the hostname and IP from each node), and as such does not provide cluster-level ssh to nodes out of the box. Depending on your actual provider/setup there are different ways of connecting to nodes, but they all boil down to locating your ssh key, opening the appropriate ports on the firewall/security groups, and issuing ssh -i key user@node_instance_ip to access the node. If you are running locally with virtual machines, you can set up your own ssh keypairs and do the same trick.
You can effectively shell into a pod using kubectl exec (I know it's not exactly what the question asks, but it might be helpful).
An example usage would be kubectl exec -it name-of-your-pod -- /bin/bash, assuming you have bash installed.
Hope that helps.
You first have to extend kubectl with plugins by adding https://github.com/luksa/kubectl-plugins.
Basically, to "install" ssh, e.g.:
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
Then make the file executable and make sure kubectl-ssh is on your path.
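Concretely, a minimal sketch (the install directory is just an example; any directory on your PATH works):
wget https://raw.githubusercontent.com/luksa/kubectl-plugins/master/kubectl-ssh
chmod +x kubectl-ssh                  # kubectl plugins must be executable
sudo mv kubectl-ssh /usr/local/bin/   # put it somewhere on your PATH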

Move mongo database from vagrant outside of VM

Hello guys, I'm trying to find a way to make my mongo database inside Vagrant accessible from outside the VM. I'm reading some posts in this forum, but they're related to Postgres and MySQL.
When I run npm start this is the code I have in my package.json
"start": "MONGODB=mongodb://localhost:27017....
So the problem is that the database gets saved on the virtual machine's localhost, and so, when it runs, it won't be accessible outside the VM. How can I change this localhost path to communicate with the outside?
It is no different whether it is Vagrant or another server.
The db file location is specified in /etc/mongodb.conf. By default, databases are saved in /data/db.
So the problem is that the database gets saved on the virtual machine's localhost, and so, when it runs, it won't be accessible outside the VM. How can I change this localhost path to communicate with the outside?
If you want the db to be accessible from your host machine, you need to replace localhost with the IP of the Vagrant VM (if you specified a private IP) or, better, have mongod bind to 0.0.0.0 so it is accessible from all network interfaces.
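A minimal sketch of the relevant setting (the path and key names vary with the MongoDB version; newer versions use a YAML /etc/mongod.conf with net.bindIp instead):
# /etc/mongodb.conf
bind_ip = 0.0.0.0   # listen on all interfaces instead of 127.0.0.1 only
port = 27017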
I did it; this link gave me the answer: Vagrant reverse port forwarding?
It seems that by default the host machine is reachable at 10.0.2.2 from inside Vagrant, so if I run mongo 10.0.2.2:27017 inside Vagrant, it connects to my databases outside of Vagrant.
Therefore, this is what I need to put in my package.json to run npm start...
"start": "MONGODB=mongodb://10.0.2.2:27017/

How to use the "Remote Systems" view in Eclipse to explore a Docker container file system?

The Eclipse Remote Systems view is a great tool to connect to VMs and explore their file systems; the New Connection wizard offers several system types (Linux, SSH Only, FTP Only, among others).
First I find out the container IP by running this command:
docker inspect <container> | grep IPAddress | cut -d '"' -f 4
Once I have the IP, I launch the New Connection wizard from the Remote Systems view. I tried selecting Linux, SSH Only, and FTP Only; in the Hostname field I paste the container IP and click Finish, and the connection seems to be created successfully. Now when I try to expand the Files node, it prompts for a user and password, but the problem is that I don't have that info. Does the user/password vary from container to container? How can I get this info?
You can just instantiate a container from that image, but with a shell, so that you can see which usernames are configured in that image.
docker run -it node /bin/bash
You can then configure users and passwords and commit the container (note that docker commit takes a container, not an image):
docker commit <container-id> my-node:0.1
Then you can instantiate a new container:
docker run -d -p 80:9080 -p 443:9443 my-node
Is ssh also running in that container? If not you will have to install it into the container so that you can ssh to it.
A docker container only runs a single parent process at a time (on your host machine that parent process is 'init' which runs a bunch of system services). In the case of your node container, that parent process is a node server.
Eclipse connects to a remote machine by connecting to a listener on that machine using some protocol, SSH or FTP for example. With the docker container, there is no process listening for such a connection, so you cannot connect using Eclipse as it is. You have two options...
Use the command line and docker exec to connect to the machine and explore its filesystem. No pretty pictures, but you don't need a lot of knowledge (see the sketch after this list).
Modify your container in some way so you can connect to it. You have two options here...
A. Modify your image to run an SSH daemon. A simple way to do that is to use the phusion/baseimage container as your parent, and have it spawn both the ssh daemon and the node server. You need to know a good amount about linux sysadmin to get this working (not a lot, but a good amount).
B. Launch a second copy of the container with a different command, such as an ssh daemon. You can then connect to the second copy. This has the downside that it won't be the same container you're interested in, and you STILL have to modify the image, since I doubt the node image even has an ssh daemon installed... but it requires less knowledge than wrapping your head around runit.
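A hedged sketch of both routes (container and image names are placeholders; the second route assumes you have built an image that actually includes an ssh daemon):
# Option 1: explore the running container directly
docker exec -it my-node-container /bin/bash    # or /bin/sh if bash isn't installed
# Option 2B: run a second copy of a modified image with sshd as the command
docker run -d -p 2222:22 my-node-ssh /usr/sbin/sshd -D
ssh -p 2222 root@localhost                     # then point Eclipse at localhost, port 2222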