What is a faster alternative to Python's http.server (or SimpleHTTPServer)? - command-line

Python's http.server (or SimpleHTTPServer for Python 2) is a great way to serve the contents of the current directory from the command line:
python -m http.server
However, as far as web servers go, it's very slooooow...
It behaves as though it's single threaded, and occasionally causes timeout errors when loading JavaScript AMD modules using RequireJS. It can take five to ten seconds to load a simple page with no images.
What's a faster alternative that is just as convenient?

http-server for node.js is very convenient, and is a lot faster than Python's SimpleHTTPServer. This is primarily because it uses asynchronous IO for concurrent handling of requests, instead of serialising requests.
Installation
Install node.js if you haven't already. Then use the node package manager (npm) to install the package, using the -g option to install globally. If you're on Windows you'll need a prompt with administrator permissions, and on Linux/OSX you'll want to sudo the command:
npm install http-server -g
This will download any required dependencies and install http-server.
Use
Now, from any directory, you can type:
http-server [path] [options]
Path is optional, defaulting to ./public if it exists, otherwise ./.
Options are [defaults]:
-p The port number to listen on [8080]
-a The host address to bind to [localhost]
-i Display directory index pages [True]
-s or --silent Silent mode won't log to the console
-h or --help Displays help message and exits
So to serve the current directory on port 8000, type:
http-server -p 8000
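And if you need other machines on your network to reach the server, bind to all interfaces with the -a option described above; for example:
http-server -p 8000 -a 0.0.0.0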

I recommend: Twisted (http://twistedmatrix.com)
an event-driven networking engine written in Python and licensed under the open source MIT license.
It's cross-platform and was preinstalled on OS X 10.5 to 10.12. Amongst other things you can start up a simple web server in the current directory with:
twistd -no web --path=.
Details
Explanation of Options (see twistd --help for more):
-n, --nodaemon don't daemonize, don't use default umask of 0077
-o, --no_save do not save state on shutdown
"web" is a Command that runs a simple web server on top of the Twisted async engine. It also accepts command line options (after the "web" command - see twistd web --help for more):
--path=<path>    <path> is either a specific file or a directory to be set as
                 the root of the web server. Use this if you have a directory
                 full of HTML, cgi, php3, epy, or rpy files or any other files
                 that you want to be served up raw.
There are also a bunch of other commands such as:
conch A Conch SSH service.
dns A domain name server.
ftp An FTP server.
inetd An inetd(8) replacement.
mail An email service
... etc
Installation
Ubuntu
sudo apt-get install python-twisted-web (or python-twisted for the full engine)
Mac OS-X (comes preinstalled on 10.5 - 10.12, or is available in MacPorts and through Pip)
sudo port install py-twisted
Windows
installer available for download at http://twistedmatrix.com/
HTTPS
Twisted can also utilise security certificates to encrypt the connection. Use this with your existing --path and --port (for plain HTTP) options.
twistd -no web -c cert.pem -k privkey.pem --https=4433
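For example, combined with a document root (assuming cert.pem and privkey.pem already exist, e.g. generated with openssl):
twistd -no web --path=. -c cert.pem -k privkey.pem --https=4433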

Go 1.0 includes an HTTP server and utilities for serving files with a few lines of code.
package main

import (
    "fmt"
    "log"
    "net/http"
)

func main() {
    fmt.Println("Serving files in the current directory on port 8080")
    http.Handle("/", http.FileServer(http.Dir(".")))
    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
Run this source file with go run myserver.go, or build an executable with go build myserver.go.

Try webfs, it's tiny and doesn't depend on having a platform like node.js or python installed.
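A minimal invocation might look like this (a sketch; -p for the port and -r for the document root, per the webfsd man page, but double-check the flags on your install):
webfsd -p 8000 -r .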

If you use Mercurial, you can use the built in HTTP server. In the folder you wish to serve up:
hg serve
From the docs:
export the repository via HTTP
Start a local HTTP repository browser and pull server.
By default, the server logs accesses to stdout and errors to
stderr. Use the "-A" and "-E" options to log to files.
options:
-A --accesslog name of access log file to write to
-d --daemon run server in background
--daemon-pipefds used internally by daemon mode
-E --errorlog name of error log file to write to
-p --port port to listen on (default: 8000)
-a --address address to listen on (default: all interfaces)
--prefix prefix path to serve from (default: server root)
-n --name name to show in web pages (default: working dir)
--webdir-conf name of the webdir config file (serve more than one repo)
--pid-file name of file to write process ID to
--stdio for remote clients
-t --templates web templates to use
--style template style to use
-6 --ipv6 use IPv6 in addition to IPv4
--certificate SSL certificate file
use "hg -v help serve" to show global options

Here's another option: the Web Server for Chrome extension.
Once installed, run it by creating a new tab in Chrome and clicking the Apps button near the top left.
It has a simple GUI. Click "Choose folder", then click the http://127.0.0.1:8887 link.
https://www.youtube.com/watch?v=AK6swHiPtew

I found python -m http.server unreliable; some responses would take seconds.
Now I use a server called Ran https://github.com/m3ng9i/ran
Ran: a simple static web server written in Go

Also consider devd, a small webserver written in Go. Binaries for many platforms are available from its releases page.
devd -ol path/to/files/to/serve
It's small, fast, and provides some interesting optional features like live-reloading when your files change.

If you have PHP installed you could use the built-in server.
php -S 0:8080
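The built-in server can also serve a directory other than the current one via the standard -t flag:
php -S 0.0.0.0:8080 -t /path/to/docroot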

give polpetta a try ...
npm install -g polpetta
then you can
polpetta ~/folder
and you are ready to go :-)

Using Servez as a server
Download Servez
Install It, Run it
Choose the folder to serve
Pick "Start"
Go to http://localhost:8080 or pick "Launch Browser"
Note: I threw this together because Web Server for Chrome is going away (Chrome is removing support for apps), and because I support art students who have zero experience with the command line.

Yet another Node-based simple command-line server:
https://github.com/greggman/servez-cli
Written partly in response to http-server having issues, particularly on Windows.
installation
Install node.js then
npm install -g servez
usage
servez [options] [path]
With no path it serves the current folder.
By default it serves index.html for folder paths if it exists. It serves a directory listing for folders otherwise. It also serves CORS headers. You can optionally turn on basic authentication with --username=somename --password=somepass and you can serve https.
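For example, combining the options mentioned above (the credentials are placeholders):
servez --username=somename --password=somepass ~/my-site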

I like live-server. It is fast and has a nice live-reload feature, which is very convenient during development.
Usage is very simple:
cd ~/Sites/
live-server
By default it creates a server with IP 127.0.0.1 and port 8080.
http://127.0.0.1:8080/
If port 8080 is not free, it uses another port:
http://127.0.0.1:52749/
http://127.0.0.1:52858/
If you need to see the web server on other machines in your local network, you can check what is your IP and use:
live-server --host=192.168.1.121
And here is a script that automatically grabs the IP address of the default interface. It works on macOS only.
If you put it in your .bash_profile, the live-server command will automatically launch the server with the correct IP.
# **
# Get IP address of default interface
# *
function getIPofDefaultInterface()
{
    local __resultvar=$1
    # Get the default route's interface
    local iface=$(route -n get 0.0.0.0 2>/dev/null | awk '/interface: / {print $2}')
    if [ -n "$iface" ]; then
        # Get the IP of the default route's interface
        local __IP=$(ipconfig getifaddr $iface)
        eval $__resultvar="'$__IP'"
    else
        # No default route found; fall back to 0.0.0.0
        eval $__resultvar="'0.0.0.0'"
    fi
}
alias getIP='getIPofDefaultInterface IP; echo $IP'
# **
# live-server
# https://www.npmjs.com/package/live-server
# *
alias live-server='getIPofDefaultInterface IP && live-server --host=$IP'

I've been using filebrowser for the past couple of years and it is the best alternative I have found.
Features I love about it:
Cross-platform: it supports Linux, macOS and Windows. It also supports Docker.
Downloading stuff is a breeze: it can automatically convert a folder into zip, tar.gz, etc. for transferring folders.
You can grant file or folder access to every user.


Are there ways to use VS Code plugins in Google Cloud Shell?

I have a few quick navigation plugins, such as "block travel", that I use all the time. Is there a way to use these in Cloud Shell?
I imagine there are some restrictions, but even some simple editor plugins can be huge timesavers.
While I'm at it: alt-D to duplicate a line, or transposing lines, seem to be missing and are hard to get working through key remapping, at least within the shell. In general, keyboard shortcuts seem to get trapped by the browser or PWA wrapper. I'm using Cloud Shell as a webapp on a Chromebook, FWIW, for various secure projects.
I have come up with a solution that covers both aspects of your question
To get Unlimited Persistent Disk:
You can use Google Cloud Storage FUSE
Google Cloud Storage FUSE lets you mount a GCS bucket as a folder on your Linux instance. By doing that you get an "unlimited" persistent disk, and it is super simple to set up since gcsfuse is already installed in Cloud Shell.
1. Create a GCS bucket (you only need to run this once) -- replace BUCKET_NAME with any name:
gsutil mb "gs://BUCKET_NAME/"
2. Create a local directory for mounting -- replace FOLDER_NAME with the chosen directory name:
mkdir /home/[USER]/[FOLDER_NAME]
chmod 777 /home/[USER]/[FOLDER_NAME]
3. Mount the bucket onto the local filesystem (note: you need to re-run this every time Cloud Shell starts)
gcsfuse -o nonempty -file-mode=777 -dir-mode=777 --uid=1000 --debug_gcs [BUCKET_NAME] /home/[USER]/[FOLDER_NAME]
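To detach the bucket again, use the standard FUSE unmount command:
fusermount -u /home/[USER]/[FOLDER_NAME]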
To use third party plugins in cloud shell:
You can use an environment customization script (.customize_environment), as mentioned in the public documentation. It allows you to install additional packages into your Cloud Shell environment when it starts.
For reference, below are the steps to install VS Code plug-in.
Step 1:
To install the VS Code server, run the script named visual_studio_code.sh (below) in the root directory workspace of the Cloud Shell Editor.
visual_studio_code.sh file:
export VERSION=`curl -s https://api.github.com/repos/cdr/code-server/releases/latest | grep -oP '"tag_name": "v\K(.*)(?=")'`
wget https://github.com/cdr/code-server/releases/download/v$VERSION/code-server-$VERSION-linux-amd64.tar.gz
tar -xvzf code-server-$VERSION-linux-amd64.tar.gz
cd code-server-$VERSION-linux-amd64
Run the script using the below command in shell,
./visual_studio_code.sh
If you get a permission denied error, make the script executable first and run it again:
chmod +x visual_studio_code.sh
./visual_studio_code.sh
Step 2:
Make a customization script in the root directory workspace in Cloud Shell Editor to start VS Code Server on boot with the below commands :
.customize_environment file:
#!/bin/sh
# .customize_environment runs in the background as root; wait for your user to initialize
sleep 20
sudo -u [USER] /home/[USER]/code-server-3.10.2-linux-amd64/code-server --auth none --port 9090
Step 3:
To view Visual Studio Code server on port 9090 :
Click on Web Preview > Change Port > 9090
If you get a 404 error, remove '?authuser=0' from the URL.
Visual Studio Code Server will now be running on the browser!!!
Block-travel navigation plugin:
To get the block-travel navigation plugin in Cloud Shell, run the following commands in the shell from the root directory:
wget https://github.com/efatsi/block-travel/archive/refs/tags/v1.0.0.tar.gz
tar xzvf v1.0.0.tar.gz
ls
#You will see block-travel-1.0.0
block-travel-1.0.0/keymaps/block-travel.cson --auth none --port 9090
# You might get Permission denied; if so, run the next two commands, otherwise go to the web preview on port 9090
chmod +x block-travel-1.0.0/keymaps/block-travel.cson
block-travel-1.0.0/keymaps/block-travel.cson --auth none --port 9090
Open the web preview on port 9090, and you will be able to navigate through the VS Code files using:
Alt+up for block-travel.jumpUp
Alt+shift+up for block-travel.selectUp
Alt+down for block-travel.jumpDown
Alt+shift+down for block-travel.selectDown
WARNING: This should not be considered a long-term solution, just a stopgap until this is supported in an easier fashion.
This might not be the greatest idea, but it does seem to work for the vim extension I tried in my environment. It's probably best to make a request through the in-product feedback to get it officially added, but until then you can follow these steps.
Upload the .vsix package to your $HOME directory.
Unzip the package into the /google/devshell/editor/theia/plugins directory. This action will not persist so you'll want to add the command to the .customize_environment script actions.
e.g.
sudo unzip vscodevim.vsix -d /google/devshell/editor/theia/plugins/vscode-vim
Now for the questionable part. You'll want to install the pslist package to make life easy so you have access to the rkill command. You probably also want to add this to the .customize_environment file as well since it also will not persist.
sudo apt install pslist
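Putting both pieces together, a sketch of what the relevant .customize_environment lines might look like (the vsix filename and paths follow the steps above):
#!/bin/sh
# runs as root at Cloud Shell boot
apt-get install -y pslist
unzip -o /home/[USER]/vscodevim.vsix -d /google/devshell/editor/theia/plugins/vscode-vim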
Now we need to get the process ID for the editor. Currently this seems to be spawned by a supervisord command, which also spawns the tmux session, so we're going to grab the process ID of the runuser command it spawns (and filter for the theia one just in case).
ps ax | grep runuser | grep "theia start"
Then we can use rkill to kill the process and all of its children, which will cause supervisord to restart it for us, and the plugin should be available.
sudo rkill PID_OF_GREP_OUTPUT
I'm not sure the best way to script the rkill command yet, since I don't know the timing of when it's up vs the .customize_environment execution, so right now I run this each time I start up a new VM.
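One way to script the lookup, assuming the ps/grep pipeline above keeps working (filter out the grep process itself so you don't kill the wrong PID):
PID=$(ps ax | grep runuser | grep "theia start" | grep -v grep | awk '{print $1}')
[ -n "$PID" ] && sudo rkill "$PID"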
If anything goes horribly wrong, you should be able to request a restart of the VM from the menu options and get a fresh one.
Cloud Shell offers a VS Code editor experience through Theia. Did you try the Cloud Code editor in Cloud Shell? It is exposed through the "Open Editor" button at the top right; it gives you a VS Code experience, with all the usual navigation keys available in the editor.

How do I use SFTP without SSH?

I want a fast and flexible file server but I don't need encryption or authentication. How can I use SFTP for this on Linux systems?
SFTP happens to be used by SSH servers but it's a well-developed protocol that works well on its own. The sftp-server developed by OpenSSH has no dependency on an SSH server; sftp-server uses standard input/output. (Other SFTP servers are similar.)
It is trivial to share a filesystem via SFTP, similar to what you might do with NFS but without the need for root access. I'll use socat as the daemon for this ad-hoc example, but xinetd would make a more permanent solution. The location of sftp-server is from my Ubuntu installation of the openssh-sftp-server package.
On the server:
$ mkdir shared_to_the_world
$ cd shared_to_the_world
$ socat tcp-listen:1234,reuseaddr,fork exec:/usr/lib/openssh/sftp-server
On the client:
$ mkdir /tmp/sftp_test
$ sshfs -o reconnect,ssh_command="nc my_sftp_server_address 1234 --" : /tmp/sftp_test
$ cd /tmp/sftp_test
Now your client (and anyone else's!) can seamlessly work with the files in the shared directory on the server. Both read and write are enabled, so be careful.
Consider using socat listen's "bind" and "range" options to limit the access to your server.
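For the more permanent xinetd setup mentioned above, a service entry along these lines should work (a sketch; adjust the user and the sftp-server path for your system):
service sftp-anon
{
    type        = UNLISTED
    port        = 1234
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    server      = /usr/lib/openssh/sftp-server
}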
You can't use SFTP without SSH. Take a look at https://www.ssh.com/ssh/sftp/ (emphasis mine):
"SFTP (SSH File Transfer Protocol) is a secure file transfer protocol. It runs over the SSH protocol. It supports the full security and authentication functionality of SSH.
...
The SFTP port number is the SSH port 22 ... It is basically just an SSH server. Only once the user has logged in to the server using SSH can the SFTP protocol be initiated. There is no separate SFTP port exposed on servers."

How to deploy a meteor application to my own server?

How to deploy a meteor application to my own server?
flavour 1: the development and deployment server are the same;
flavour 2: the development server is one (maybe my localhost) and the deployment server is another (maybe a VPS in the cloud);
flavour 3: I want to make a "meteor hosting" domain, just like "meteor.com". Is it possible? How?
Update:
I'm running Ubuntu and I don't want to "demeteorize" the application. Thank you.
Meteor documentation currently says:
"[...] you need to provide Node.js 0.8 and a MongoDB server. You can then run the application by invoking node, specifying the HTTP port for the application to listen on, and the MongoDB endpoint."
So, among the several ways to install Node.js, I got it up and running by following the best advice I found, which is basically unpacking the latest version available directly from the official Node.js website, already compiled for Linux (64-bit, in my case):
# Does NOT need to be root user:
# create directory
mkdir -p ~/.nodes && cd ~/.nodes
# download latest Node.js distribution
curl -O http://nodejs.org/dist/v0.10.13/node-v0.10.13-linux-x64.tar.gz
# unpack it
tar -xzf node-v0.10.13-linux-x64.tar.gz
# discard it
rm node-v0.10.13-linux-x64.tar.gz
# rename unpacked folder
mv node-v0.10.13-linux-x64 0.10.13
# create symlink
ln -s 0.10.13 current
# add path to PATH
export PATH="$HOME/.nodes/current/bin:$PATH"   # note: ~ does not expand inside quotes, so use $HOME
# check
node --version
npm --version
And to install MongoDB, I simply followed the instructions in the MongoDB manual available in the Documentation section of its official website:
# Needs to be root user (apply "sudo" if not at root shell)
apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | tee /etc/apt/sources.list.d/10gen.list
apt-get update
apt-get install mongodb-10gen
The server is ready to run Meteor applications! For deployment, the main "issue" is where the "bundle" operation happens. We need to run the meteor bundle command from inside the application's source tree. For example:
cd ~/leaderboard
meteor bundle leaderboard.tar.gz
If the deployment will happen in another server (flavour 2), we need to upload the bundle tar.gz file to it, using sftp, ftp, or any other file transfer method. Once the file is there, we follow both Meteor documentation and the README file which is magically included in the root of the bundle tree:
# unpack the bundle
tar -xvzf leaderboard.tar.gz
# discard tar.gz file
rm leaderboard.tar.gz
# rebuild native packages
pushd bundle/programs/server/node_modules
rm -r fibers
npm install fibers@1.0.1
popd
# setup environment variables
export MONGO_URL='mongodb://localhost'
export ROOT_URL='http://example.com'
export PORT=3000
# start the server
node main.js
If the deployment will be in the same server (flavour 1), the bundle tar.gz file is already there, and we don't need to recompile the native packages. (Just skip the corresponding section above.)
Cool! With these steps, I've got the "Leaderboard" example deployed to my custom server, not "meteor.com"... (only to learn and value their services!)
I still have to make it run on port 80 (I plan to use Nginx for this), persist environment variables, start Node.js detached from the terminal, et cetera. I am aware this setup is a "barely naked" one... just the base, the first step, the basic foundation stones.
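As a stopgap before a proper init setup, a minimal way to persist the variables and detach the process (nohup is standard; app.log is just an arbitrary log file name):
export MONGO_URL='mongodb://localhost'
export ROOT_URL='http://example.com'
export PORT=3000
nohup node main.js > app.log 2>&1 &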
The application has been "manually" deployed, without taking advantage of all the meteor deploy command's magic features... I've seen people publish their "meteor.sh" and "meteoric.sh" scripts, and I am following the same path... creating a script to emulate the "single command deploy" feature... aware that in the near future all this stuff will be a thing for the pioneer Meteor explorers only, as it grows into a whole Galaxy! and most of these issues will be an archaic thing of the past.
Anyway, I am very happy to see how fast the deployed application runs in the cheapest VPS ever, with a surprisingly low latency and almost instant simultaneous updates in several distinct browsers. Fantastic!
Thank you!!!
Try Meteor Up too.
With it you can deploy to any Ubuntu server. It uses the meteor build command internally, and it is used by many for deploying production apps.
I created Meteor Up to allow developers to deploy production-quality Meteor apps until Galaxy comes.
I would recommend flavor two with a separate deployment server. Separation of concerns leads to a more stable environment for your code and its easier to debug.
To do it, there's the excellent Meteoric bash script that helps you deploy to Amazon's EC2 or your own server.
As for how to roll your own meteor.com, I suggest you break that out into its own Stack Overflow question, as it's not related. Plus, I can't answer it :)
I got this done a few days ago. I deployed my Meteor application to my own server on DigitalOcean. I used the Meteor Up tool for managing deploys and Nginx on the server to serve the app.
It's very simple to use. Install Meteor Up with:
npm install -g mup
Then create a folder for the deployment configuration and go into it. Run the mup init command; it will create two configuration files. We are interested in the mup.json file, which holds the configuration for the deployment process. It looks like this:
{
// Server authentication info
"servers": [
{
"host": "hostname",
"username": "root",
"password": "password",
// or pem file (ssh based authentication)
//"pem": "~/.ssh/id_rsa",
// Also, for non-standard ssh port use this
//"sshOptions": { "port" : 49154 },
// server specific environment variables
"env": {}
}
],
// Install MongoDB on the server. Does not destroy the local MongoDB on future setups
"setupMongo": true,
// WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
"setupNode": true,
// WARNING: nodeVersion defaults to 0.10.36 if omitted. Do not use v, just the version number.
"nodeVersion": "0.10.36",
// Install PhantomJS on the server
"setupPhantom": true,
// Show a progress bar during the upload of the bundle to the server.
// Might cause an error in some rare cases if set to true, for instance in Shippable CI
"enableUploadProgressBar": true,
// Application name (no spaces).
"appName": "meteor",
// Location of app (local directory). This can reference '~' as the users home directory.
// i.e., "app": "~/Meteor/my-app",
// This is the same as the line below.
"app": "/Users/arunoda/Meteor/my-app",
// Configure environment
// ROOT_URL must be set to https://YOURDOMAIN.com when using the spiderable package & force SSL
// your NGINX proxy or Cloudflare. When using just Meteor on SSL without spiderable this is not necessary
"env": {
"PORT": 80,
"ROOT_URL": "http://myapp.com",
"MONGO_URL": "mongodb://arunoda:fd8dsjsfh7#hanso.mongohq.com:10023/MyApp",
"MAIL_URL": "smtp://postmaster%40myapp.mailgun.org:adj87sjhd7s#smtp.mailgun.org:587/"
},
// Meteor Up checks if the app comes online just after the deployment.
// Before mup checks that, it will wait for the number of seconds configured below.
"deployCheckWaitTime": 15
}
After you fill in all the data fields, you can start the setup process with the command mup setup. It will set up your server.
After a successful setup you can deploy your app: just type mup deploy in the console.
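The whole flow, condensed (the same commands as described above):
npm install -g mup
mkdir my-app-deploy && cd my-app-deploy
mup init      # creates the two configuration files, including mup.json
# edit mup.json as shown above
mup setup     # provisions the server
mup deploy    # builds the app and deploys it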
Another alternative is to just develop on your own server to start with.
I just created a Digital Ocean box and then connected my Cloud9 IDE account.
Now, I can develop right on the machine in a Cloud IDE and deployment is easy--just copying files.
I created a tutorial that shows exactly how my set up works.
I had a lot of trouble with Meteor Up, so I decided to write my own deploy script. I also added additional info on how to set up nginx and mongodb. Hope it helps!
See /sh folder in repository
What the script meteor-deploy.sh does (a rough sketch of this flow follows the lists below):
Select environment (./meteor-deploy.sh for staging, ./meteor-deploy.sh prod for production)
Build and bundle production version of the meteor app
Copy bundle to server
SSH into server
Do a mongodump to backup database
Stop the running app
Unpack bundle
Overwrite app files
Re-install app node package dependencies
Start the app (uses forever)
Tested for the following server configurations:
Ubuntu 14.04.4 LTS
meteor --version 1.3.2.4
node --version v0.10.41
npm --version 3.10.3
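For reference, a rough sketch of that flow (this is not the author's actual script, which lives in the /sh folder of the repository; the server address, the /opt/myapp path, and the forever-based process management are all assumptions for illustration):
#!/bin/bash
set -e
# environment selection (./meteor-deploy.sh [prod]) omitted for brevity
SERVER="user@example.com"                               # assumption: your deploy target
meteor build ../build --architecture os.linux.x86_64    # build and bundle the app
scp ../build/*.tar.gz "$SERVER:/tmp/bundle.tar.gz"      # copy bundle to server
ssh "$SERVER" /bin/bash <<'EOF'
mongodump --out /tmp/db-backup-$(date +%F)              # backup the database
forever stop /opt/myapp/bundle/main.js || true          # stop the running app
rm -rf /opt/myapp/bundle                                # remove the old app files
tar -xzf /tmp/bundle.tar.gz -C /opt/myapp               # unpack the new bundle
(cd /opt/myapp/bundle/programs/server && npm install)   # re-install node dependencies
forever start /opt/myapp/bundle/main.js                 # start the app (MONGO_URL etc. exports omitted)
EOF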

Google App Engine Java on Eclipse cannot connect to localhost

Usage: [options]
Options:
--help, -h Show this help message and exit.
--server=SERVER The server to use to determine the latest
-s SERVER SDK version.
--address=ADDRESS The address of the interface on the local machine
-a ADDRESS to bind to (or 0.0.0.0 for all interfaces).
--port=PORT The port number to bind to on the local machine.
-p PORT
--sdk_root=DIR Overrides where the SDK is located.
--disable_update_check Disable the check for newer SDK versions.
--generated_dir=DIR Set the directory where generated files are created.
--jvm_flag=FLAG Pass FLAG as a JVM argument. May be repeated to
supply multiple flags.
I had come across a similar problem while working with Google App Engine for Python: localhost was not getting its connection established.
$fuser -k 8080/tcp
Try this in terminal/command prompt and restart localhost.
It worked for me. Hope it works for you also. Good luck!
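If you would rather see which process is holding the port before killing it:
lsof -i tcp:8080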

How can I automate deployment through multiple ssh firewalls (using PW auth)?

I'm stuck in a bit of annoying situation.
There's a chain of machines between my desktop and the production servers. Something like this:
desktop -> firewall 1 -> firewall 2 -> prod_box 1
-> prod_box 2
-> ...
I'm looking for a way to automate deployment to the prod boxes via ssh.
I'm aware there are a number of solutions in general, but my restrictions are:
No changes permitted to firewall 2
No config changes permitted to prod boxes (only content)
firewall 1 has a local user account for me
firewall 2 and prod are accessed as root
port 22 is the only open port between each link
So, in general the command sequence I do to deploy is:
scp archive.tar user@firewall1:archive.tar
ssh user@firewall1
scp archive.tar root@firewall2:/tmp/archive.tar
ssh root@firewall2
scp /tmp/archive.tar root@prod1:/tmp/archive.tar
ssh root@prod1
cd /var/www/
tar xvf /tmp/archive.tar
It's a bit more complex than that in reality, but that's a basic summary of the tasks to do.
I've put my ssh key in firewall1:/home/user/.ssh/authorized_keys, so that's no problem.
However, I can't do this for firewall2 or prod boxes.
It'd be great if I could run this (the commands above) from a shell script locally, type my password 4 times, and be done with it. Sadly I cannot figure out how to do that.
I need some way to chain ssh commands. I've spent all afternoon trying to use Python to do this and eventually gave up because the ssh libraries don't seem to support password-entry-style login.
What can I do here?
There must be some kind of library I can use to:
login via ssh using either a key file OR a dynamically entered password
run remote shell commands through the chain of ssh tunnels
I'm not really sure what to tag this question, so I've just left it as ssh, deployment for now.
NB. It'd be great to use ssh tunnels and a deployment tool to push these changes out, but I'd still have to manually log in to each box to set up the tunnel, and that won't work anyway because of the port blocking.
I am working on Net::OpenSSH::Gateway, an extension for my other Perl module Net::OpenSSH, that does just that.
For instance:
use Net::OpenSSH;
use Net::OpenSSH::Gateway;

my $gateway = Net::OpenSSH::Gateway->find_gateway(
    proxies => ['ssh://user@firewall1',
                'ssh://password:root@firewall2'],
    backend => 'perl');

for my $host (@prod_hosts) {
    my $ssh = Net::OpenSSH->new($host, gateway => $gateway);
    if ($ssh->error) {
        warn "unable to connect to $host\n";
        next;
    }
    $ssh->scp_put($file_path, $destination)
        or warn "scp for $host failed\n";
}
It requires Perl available in both firewalls, but no write permissions or installing any additional software there.
Unfortunately this isn't possible to do as one shell script. I did try, but ssh's password negotiation requires an interactive terminal, which you don't get with chained ssh commands. You could do it with passwordless keys, but since that's highly insecure and you can't do it anyway, never mind.
The basic idea is that each server sends a bash script to the next one, which is then activated and sends the next one (and so on) until it reaches the last one, which does the distribution.
However, since this requires an interactive terminal at each stage, you're going to need to follow the payload down the chain manually executing each script as you go, somewhat as you do now but with less typing.
Obviously, you will need to customise them a bit, but try these scripts:
script1.sh
#!/bin/bash
user=doug
firewall1=firewall_1
#Minimise password entries across the board.
tar cf payload1.tar script3.sh archive.tar
tar cf payload2.tar script2.sh payload1.tar
scp payload2.tar ${user}@${firewall1}:payload2.tar
ssh ${user}@${firewall1} "tar xf payload2.tar;chmod +x script2.sh"
echo "Now connect to ${firewall1} and run ./script2.sh"
script2.sh
#!/bin/bash
user=root
firewall2=firewall_2
# Minimise password entries
scp payload1.tar ${user}@${firewall2}:/tmp/payload1.tar
ssh ${user}@${firewall2} "cd /tmp;tar xf payload1.tar;chmod +x script3.sh"
echo "Now connect to ${firewall2} and run /tmp/script3.sh"
script3.sh
#!/bin/bash
user=root
hosts="prod1 prod2 prod3 prod4"
for host in $hosts
do
scp archive.tar ${user}@${host}:/tmp/archive.tar
ssh ${user}@${host} "cd /var/www; tar xvf /tmp/archive.tar"
done
It does require 3 password entries per firewall, which is a bit annoying, but such is life.
Does this do you any good?