I have a few endpoints that I would like to connect to over SSH for things like PowerShell Remoting, so that I can run scripts for automation.
How could I centralize access?
1. Endpoints would connect over SSH to a central server.
2. Endpoints would keep this SSH connection open.
3. I would need to create universal access over SSH to the endpoints that terminate at the server. By universal access, I mean devices on the same network would ideally be able to use these open SSH connections for whatever SSH purpose they need.
I am a bit confused about step 3.
Would I only be able to perform remote actions and connect over SSH by running PowerShell, PuTTY, Remote Desktop, etc. on the central server itself?
Would I be able to 'bring in' the SSH connections via this SSH server onto the general network, so that devices on the same network can connect to the systems over SSH through the central server?
Is there a better way to have multiple endpoints connect over SSH to a central server for running PowerShell, Remote Desktop, or other SSH tasks from devices on the same network?
Devices could be Windows or Linux
I'm guessing:
Each of the endpoints on the various remote networks connects over SSH to the SSH server on LAN X.
Devices on LAN X would need to SSH into the SSH server on LAN X.
Once SSHed into the SSH server, the devices on LAN X could then run PowerShell scripts and other SSH tasks on the endpoints (a sketch of this reverse-tunnel setup is below).
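One way to get the "universal access" from step 3 is to have each endpoint open a reverse SSH tunnel to the central server and to enable GatewayPorts on that server, so the forwarded ports are reachable from the rest of LAN X. The sketch below is only an illustration of that idea; the host name central-server, the tunnel account, the endpoint-user account and port 2201 are all hypothetical placeholders.

# On each endpoint: publish the endpoint's own SSH port (22) on the central
# server. -N = no remote command, -R = reverse (remote) forward. Give each
# endpoint its own port (2201, 2202, 2203, ...).
ssh -N -R 0.0.0.0:2201:localhost:22 tunnel@central-server

# On the central server, in /etc/ssh/sshd_config (then restart sshd), so the
# reverse-forwarded ports listen on the LAN interface and not just 127.0.0.1:
GatewayPorts clientspecified

# Any device on LAN X can then SSH to an endpoint through the central server
# and run PowerShell scripts or anything else over that session:
ssh endpoint-user@central-server -p 2201

Recent Windows builds ship an OpenSSH client and server, so the same ssh -R command should work on Windows endpoints as well as Linux ones.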
Related question:
Is there any way to tunnel all outgoing ssh connections in vscode (including those established by the remote-ssh plugin) through my on-site workstation? I have full control over the firewall for that machine and can open ports on ufw as needed for off-site access.
background:
I use vscode remote-ssh to connect to a research computing cluster when on-site.
For remote work, I would like to avoid using Cisco AnyConnect as a VPN on macOS 11.6, as routing and other OS features behave unexpectedly.
It turns out that on macOS it's sufficient to edit the ~/.ssh/config file and specify an on-site proxy host that I control in the ProxyCommand option:
Host clusterNode
    ProxyCommand ssh me@my_accessible_ssh_host nc %h %p
    HostName <firewalled node ip address>
    User my_cluster_username
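As a side note, on OpenSSH 7.3 or later the nc-based ProxyCommand can be replaced with ProxyJump, which does the same thing with less to get wrong; this is just an alternative sketch using the same placeholder names as above:

Host clusterNode
    HostName <firewalled node ip address>
    User my_cluster_username
    ProxyJump me@my_accessible_ssh_host

With either form, a plain ssh clusterNode in a terminal and the vscode remote-ssh plugin (which reads ~/.ssh/config) should both route through the on-site workstation automatically.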
Hi, I want to connect to a remote MongoDB server without using SSH tunneling (both the client and server systems will run on Windows). Is there any way to connect?
I have one server machine where PostgreSQL and an SSH server are installed. I have another client machine from which I want to connect to PostgreSQL on the server machine in a secure way. I used an SSH tunnel, which is working.
I tried to connect the client to the server using:
$ ssh -L 3307:localhost:3306 user@Host -N -f
It is working. But now I am wondering whether it is possible to start the SSH tunneling from the server side, i.e. run an ssh command on the server machine so that I get a more secure connection.
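It can be done with a remote forward (-R) instead of a local forward (-L), but only if the server machine can itself open an SSH session to the client, i.e. the client machine runs an SSH server and is reachable. A minimal sketch under that assumption, with client-host and clientuser as hypothetical names and the same ports as above:

# Run on the server machine. The client machine gets a listener on its own
# port 3307, carried back over SSH to port 3306 on the server.
ssh -N -R 3307:localhost:3306 clientuser@client-host

The client application still connects to localhost:3307 exactly as before; the real difference is which side initiates the SSH session and therefore which machine holds the credentials, not the strength of the encryption.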
I have started an Ubuntu instance on AWS EC2,
e.g. [ec2-user@ip-XXX-XX-XX-XX ~]$
Inside this instance, I am running a socket program for sending data to my local system.
The program is running properly, but it is not able to connect to my local IP.
I am also trying to ping my local system from the EC2 instance, but that is not working either (although I am able to ping Google at 8.8.8.8),
e.g. [ec2-user@ip-xxx-xx-xx-xx ~]$ ping xxx.xxx.xx.xx (my local IP)
I have set all the inbound security group rules, like All Traffic, All TCP and so on.
Sorry for my bad English.
Thank you.
Your computer (PC) cannot be pinged from an AWS-hosted machine.
This is probably because the VM on your computer is using NAT outbound to talk to the LAN, which goes to an Internet router, which sends the packets to AWS.
The reverse route (inbound to your PC) does not exist, so starting a ping echo request from an AWS machine will not work.
It is possible to get around this by opening a pass-through on your router, but generally this is not a great idea.
However, if you want to make a socket connection securely, there is a way.
First, start an SSH session with remote port forwarding. In the Linux ssh client this is done with the -R option.
For example, if your local system is running a listening service on port 80 and your remote system has the address 54.10.10.10, then
ssh -R 8080:localhost:80 ec2-user@54.10.10.10
will establish a circuit such that connections to localhost on port 8080 on the remote EC2 server are connected to localhost on port 80 of your local machine.
If you are not using an ssh CLI program, most SSH clients have a facility of this sort.
Note that it is necessary to keep the SSH session open to be able to use the forwarded connections.
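A usage sketch building on the command above: -N keeps the session open without starting a remote shell, ServerAliveInterval guards against idle timeouts, and the curl test assumes the local port-80 service speaks HTTP.

# Run on the local machine to hold the tunnel open:
ssh -N -R 8080:localhost:80 -o ServerAliveInterval=30 ec2-user@54.10.10.10

# Then, on the EC2 instance, the local service is reachable through the tunnel:
curl http://localhost:8080/

In the question's setup, the socket program on the EC2 instance would connect to localhost:8080 (or whichever port is forwarded) instead of trying to reach the local machine's IP directly.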
I have PostgreSQL 9.4 running on a Linux VPS, and I need to be able to connect to it over SSH from both Linux and Windows clients. (I will later need to connect to multiple servers, and so that all clients use the same port numbers, I'm forwarding to port 5551 for the first server, then I will use 5552, 5553, etc.)
From a Linux client I just run ssh -fNg -L 5551:localhost:5432 user@remote1.com and connect to localhost:5551 with PGAdmin3 or any other client app. Works great.
On Windows, I'm using PuTTY and Pageant. I got the connection to user@remote1.com via the terminal working, then I went to the SSH Tunnels section and added L5432 localhost:5551. The terminal connection still works, but when I try to connect with PGAdmin3 to localhost:5551 I get an error:
could not connect to server: Connection refused (0x0000274AD/10061) Is the server running on host "localhost" (::1) and accepting TCP/IP connections on port 5551?
I resolved it. Like many things, this is obvious in hindsight. I had things backward in the SSH Tunnels setup in PuTTY. It needs to be L5551 remote1.com:5432
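For anyone who prefers a command line on Windows, PuTTY's companion tool plink can open the same tunnel as the Linux one-liner above; this is a sketch assuming plink.exe is on the PATH and Pageant is holding the key:

plink -ssh -N -L 5551:localhost:5432 user@remote1.com

PGAdmin3 then connects to localhost:5551 exactly as on the Linux client, and the additional servers can use 5552, 5553, and so on.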