Cannot access remote jupyter instance via browser when running it as a user service through systemd - jupyter

I have a jupyter instance running on a remote server in AWS.
When I try to access it from my local computer via the browser, I always get a "The site can't be reached" error.
I've tried multiple different browsers and it's the same thing.
But it gets even stranger: if I fire up a terminal and just ssh into the remote AWS server, nothing else, then all of a sudden I can access the jupyter instance from my local computer via the browser just by visiting the notebook's URL.
Any idea what the heck is going on here?
Here's a more detailed description. We have two machines, A (local) and B (remote). On machine B, jupyter-lab is installed using conda.
To access jupyter-lab from my local machine A, I simply start jupyter-lab on machine B on port 80, and then all I have to do is visit machine B's public IP/domain name in the browser on machine A.
There's no need for ssh tunneling because machine B has a public IP and a domain name associated with it, e.g. machineB.aws.com:80 points to the jupyter-lab instance running on machine B.
Now, the bizarre thing in all this is that visiting machineB.aws.com:80 from the browser on machine A always gives the error "The site can't be reached", unless I simply ssh from machine A into machine B; then the site is reachable and machineB.aws.com:80 works fine.
Again, no ssh tunneling going on here; simply ssh-ing from A to B makes the site reachable?
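One way to narrow down what kind of failure this is would be to probe the port directly from machine A, e.g. with a few lines of Python (machineB.aws.com is the placeholder name from above): a timeout points at a firewall or security-group problem, while "connection refused" means nothing is listening on the port.

import socket

try:
    # machineB.aws.com stands in for machine B's real domain name
    socket.create_connection(("machineB.aws.com", 80), timeout=5).close()
    print("port 80 is reachable")
except OSError as exc:
    print("port 80 is not reachable:", exc)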
Clarification
This issue is caused by the fact that I have configured jupyter-lab to run as a user service via systemd. According to the wiki: "This process will survive as long as there is some session for that user, and will be killed as soon as the last session for the user is closed. When automatic start-up of systemd user instances is enabled, the instance is started on boot and will not be killed." If I'm not mistaken, I have configured the user service to start automatically on each boot of the computer, which makes me a bit skeptical as to why the process is being killed when no user is logged in.
Here's the systemd unit file configuration for jupyter (installed as a user unit, e.g. ~/.config/systemd/user/jupyter.service):
[Unit]
Description=Jupyter Lab

[Service]
Type=simple
ExecStart=/home/user/anaconda3/envs/myenv/bin/jupyter-lab
WorkingDirectory=/home/user/
Restart=always
RestartSec=120

[Install]
WantedBy=default.target

It seems that I might have forgotten to issue the command loginctl enable-linger username, which is necessary for a systemd user service to run without a user session on startup and to keep running after the last user session has been closed. Without lingering, the per-user systemd instance (and every service it runs) starts with the user's first session and is killed when the last session closes, which is exactly why ssh-ing into machine B made the site reachable. This is mentioned in the wiki and in several other answers.
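A quick way to verify whether lingering is actually enabled for the account is to ask loginctl; here's a small Python sketch around it (the username is a placeholder):

import subprocess

USER = "user"  # placeholder: the account that runs jupyter-lab

# loginctl reports Linger=yes or Linger=no; it may fail outright if the
# user has no session and no linger, which also means lingering is off
proc = subprocess.run(
    ["loginctl", "show-user", USER, "--property=Linger"],
    capture_output=True, text=True,
)
if proc.returncode != 0 or proc.stdout.strip() != "Linger=yes":
    print(f"lingering is off; run: sudo loginctl enable-linger {USER}")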

Related

Unable to run the test for the webapps running in cloud/remote server ( application is accessible via browser only through VPN). Getting error

The error testRigor gives is: "You must first setup a tunnel from testRigor to your machine, then try accessing the page. Read the tunnel documentation."
To fix this:
Set up a tunnel on your machine.
Request a port from testRigor support.
Then run your test.
Docs for setting up a tunnel:
https://docs.google.com/document/d/1MMQ9WBwRTSPUI589PKv5YxiyM7r5rE1K5iP_Dj5xl0Q/edit#

SSH Bastion software to get interactive shell into kubernetes container

I'm pretty new to Kubernetes. I'm building a labs/CTF project and I use Kubernetes to manage the deployment of the containers across multiple hosts, etc.
What I'd like to do is allow users to connect (i.e., get a shell) to the lab they deployed, without installing any additional software. I think the easiest way to do that is an SSH bastion/proxy that lets me set a random password (sent to the user) and redirect the user's SSH traffic to the container, using either the native interactive-shell feature or another SSH server inside the "entrypoint" container.
Here is what I'm expecting to do: user1 deploys lab X, the pods corresponding to this lab are deployed on Kubernetes, and the user gets back a random password; he/she is now able to connect to the "entrypoint" container in the lab he/she deployed by simply running ssh user1@LABS and entering the random password.
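For the container side of this, the exec API that kubectl exec uses can be driven programmatically, which is presumably what the bastion would do once a user has authenticated. A minimal sketch with the official Kubernetes Python client (the pod and namespace names are made up for the example):

from kubernetes import client, config
from kubernetes.stream import stream

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# open an interactive shell in the lab's "entrypoint" pod
resp = stream(
    v1.connect_get_namespaced_pod_exec,
    "lab-x-entrypoint",  # pod name (placeholder)
    "user1-labs",        # namespace (placeholder)
    command=["/bin/sh"],
    stdin=True, stdout=True, stderr=True, tty=True,
    _preload_content=False,
)

A real bastion would then pump bytes between the user's SSH session and resp (resp.write_stdin() / resp.read_stdout()) until the shell exits.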

Is it possible to SSH into a Virtual Machine instance using google-api-client rather than command line

I want to automate the entire process from starting an instance to running a program on that instance.
So, just as running a Python program on a local computer requires only one command on the command line, I would like to run my program on a remote VM instance with just one command.
It seems, though, that in order to SSH into a remote VM instance I have to use the command line and answer some yes/no or multiple-choice questions. Admittedly you can use the subprocess module, but I have not yet figured out how to answer the yes/no questions.
Before I do more research, however, I need to know whether what I'm doing is even possible. I would like to build a Python program using google-api-client which automates the entire process, from starting the instance to connecting the instance to a drive to running a program.
It seems, though, that I cannot SSH into a remote VM instance with Python but have to do this with the command line. Is this right?
You can use ssh from Python (link).
Minimally, any ssh client will require the IP address of the remote machine (running sshd), the port on which sshd is listening and some credentials that authenticate the client to the remote machine (generally you'll want to use ssh-keys).
Google's SDKs (including the API Client Libraries for Python) help you interact with Google's services. You can use Google's Compute Engine API (and its Python library) to provision (create) VMs, including Linux VMs. You'll need to ensure the image you use runs sshd, and the machine will need a (probably public) IP address. Once the creation call succeeds, you have a Linux VM mostly like any other (whether created on GCP, AWS, Azure, etc.). You can query Compute Engine for the public IP address of an instance.
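As a sketch of that last step, here's how the public IP can be queried with the Python API client library (the project, zone and instance names are placeholders):

from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses application-default credentials
instance = compute.instances().get(
    project="my-project", zone="us-central1-a", instance="my-vm"
).execute()
# the NAT IP of the first access config is the instance's public address
print(instance["networkInterfaces"][0]["accessConfigs"][0]["natIP"])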
As long as you've ssh'd into a Compute Engine instance using gcloud compute ssh ... from your machine (in any project), that command will have created an SSH key pair for you that is copied to your instances upon creation. You can then use the private key (~/.ssh/google_compute_engine) to authenticate your Python SSH client to the Compute Engine instance. You'll need to provide the IP address to the client too (the port defaults to 22).
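For example, with a third-party SSH library such as paramiko (the IP and username below are placeholders); note that auto-accepting the host key is also what removes the interactive yes/no prompt mentioned in the question:

import os
import paramiko

HOST = "203.0.113.10"  # placeholder: your instance's external IP
USER = "myuser"        # placeholder: your Compute Engine username

ssh = paramiko.SSHClient()
# accept the host key automatically instead of prompting yes/no
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(
    HOST,
    port=22,
    username=USER,
    key_filename=os.path.expanduser("~/.ssh/google_compute_engine"),
)
stdin, stdout, stderr = ssh.exec_command("python3 my_program.py")
print(stdout.read().decode())
ssh.close()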
NB gcloud compute ssh ... uses your machine's ssh client; it does not reimplement SSH. You can prove this to yourself by running gcloud compute ssh --ssh-flag="-vvv" ..., or by temporarily making the ssh binary (see which ssh) inaccessible to the gcloud command and trying gcloud compute ssh ... again; it won't work.
It seems though I cannot SSH into a remote VM instance with python but have to do this with command line. Is this right?
Yes, this is correct. You can automate it up to a point, but not the whole process as you have described it.
One alternative that comes to mind is to use OS Login, which manages SSH access to your VM instances using IAM, without your having to create and manage individual SSH keys.

Deploy python API on Amazon EC2 Instances

I have created a script which runs on localhost, port 5006, on an EC2 instance. I plan to make it run in the background even after I log out of the SSH terminal. The problem is that I get no response when I try to reach my script from my browser or Postman at the following link:
http://ec2-52-15-176-255.us-east-2.compute.amazonaws.com:5006/main?<myparameters>
The steps I have taken are:
1.) Created an EC2 instance of a Linux flavor (available in the Free tier),
2.) Started the Python script in a virtualenv folder, listening on the port,
3.) Now trying to reach the IP and the port as mentioned above.
Apart from that, I haven't done anything else!
Please help me understand the concepts, because there are no tutorials available which cover this in a straightforward way.
Agreed that the question is very broad, so let's start from a basic step: have you created the proper security group for your instance, one that allows port 5006 to be reached from whichever network you are connecting from?
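If not, the rule can be added in the EC2 console or programmatically; here's a minimal boto3 sketch (the security group ID is a placeholder, and 0.0.0.0/0 opens the port to the whole internet, so narrow it if you can). Note also that the script must listen on 0.0.0.0 rather than 127.0.0.1, or it will only be reachable from the instance itself.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: your instance's security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5006,
        "ToPort": 5006,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],  # "from anywhere"
    }],
)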

Linux/CMD environment and terminal on website

I am looking for a way to incorporate a command-line interface into my website. Specifically, I have two servers, one running a Linux distro and the other Windows. People can request accounts, and if I approve them, they get a user partition on one of the servers.
They can then sign in on the website and access the servers through a command-line interface. I saw a couple of repos that do something similar for Amazon EC2 servers, but I was wondering if there is anything more general.
You can use shellinabox. It runs a daemon on the server that can be accessed through a specified port: you simply enter your server's IP and the port number in a browser and you can log in.