I am trying to build a Perl-based server that will accept incoming requests and, based on each request, read/write data to the server's STDIN/STDOUT. So I need to accept data from the client's STDIN, write it to the server's STDIN (where it is handled by another process), capture the results of that request from the server's STDOUT, and then ship it to the client.
I was hoping to build this off of Net::Server but I cannot, for the life of me, figure out how to direct data specifically to the server's STDIN. In my ideal world I'd have a set of file handles like CLISTDIN, CLISTDOUT, SRVSTDIN, and SRVSTDOUT that I could discretely address and manage. I'm just at a loss of how to go about it in Net::Server.
I don't have to use Net::Server so other suggestions are welcome. Net::Server just has a number of other features I would like to use.
Thanks for any insight.
Have you thought about just using ssh? With ssh you can run a command on a remote computer (the server): your stdin is redirected to that command, and the command's stdout is redirected back to yours. Simple demo:
$ echo "hello, world" | md5sum
22c3683b094136c3398391ae71b20f04 -
$ echo "hello, world" | ssh myserver.com md5sum
22c3683b094136c3398391ae71b20f04 -
Or
$ echo "hello, world" | ssh user#myserver.com "/path/any/command --with --args x y z"
(You can set up ssh for automatic login, with an IP check if you like, using ssh-keygen and ~/.ssh/authorized_keys; I won't say more about that here.)
Turns out I could do this by overriding post_accept to change how STDIN/STDOUT were being used. I was able to create two distinct filehandles so I could refer to the client's IN/OUT without breaking the server's IO. That said, Net::Server kept getting in the way of the IO handling I wanted, so I abandoned it and just rolled my own forking server. That ended up being easier overall.
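For anyone trying the same thing, here is a rough sketch of the post_accept idea, assuming a Net::Server::Fork subclass and that the accepted socket is available as $self->{server}{client} (it is in current Net::Server). The client_in/client_out slots, the port, and the one-line-per-request protocol are purely illustrative, and whatever backend is attached to the daemon's own STDIN/STDOUT is up to your setup:

package MyServer;
use strict;
use warnings;
use IO::Handle;
use parent 'Net::Server::Fork';

# Override post_accept so the client connection is NOT dup'ed over
# STDIN/STDOUT; instead expose it through two dedicated handles.
sub post_accept {
    my $self   = shift;
    my $client = $self->{server}{client};
    open(my $in,  '<&', $client) or die "dup client (read): $!";
    open(my $out, '>&', $client) or die "dup client (write): $!";
    $out->autoflush(1);
    STDOUT->autoflush(1);
    @$self{qw(client_in client_out)} = ($in, $out);
}

sub process_request {
    my $self = shift;
    my ($cin, $cout) = @$self{qw(client_in client_out)};
    while (my $request = <$cin>) {
        # This process's own STDIN/STDOUT were left untouched above, so
        # they still point at whatever the daemon was started with.
        print STDOUT $request;    # hand the request to the backend
        my $reply = <STDIN>;      # naive: assumes one reply line per request
        last unless defined $reply;
        print {$cout} $reply;
    }
}

MyServer->run(port => 8080);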
I am on an embedded platform (mipsel architecture, Linux 2.6 kernel) where I need to monitor IPC between two closed-source processes (router firmware) in order to react to a certain event (dynamic IP change because of DSL reconnect). What I found out so far via strace is that whenever the IP changes, the DSL daemon writes a special message into a UNIX domain socket bound to a specific file name. The message is consumed by another daemon.
Now here is my requirement: I want to monitor the data flow through that specific UNIX domain socket and trigger an event (call a shell script) if a certain message is detected.

I tried to monitor the file name with inotify, but it does not work on socket files. I know I could run strace all the time, filtering its output and reacting to changes in the filtered log file, but that would be too heavy a solution because strace really slows down the system. I also know I could just poll for the IP address change via cron, but I want a watchdog, not a polling solution.

And I am interested in finding out whether there is a tool which can specifically monitor UNIX domain sockets and react to specific messages flowing through in a predefined direction. I imagine something similar to inotifywait, i.e. the tool should wait for a certain event, then exit, so I can react to the event and loop back into starting the tool again, waiting for the next event of the same type.
Is there any existing Linux tool capable of doing that? Or is there some simple C code for a stand-alone binary which I could compile on my platform (uClibc, not glibc)? I am not a C expert, but capable of running a makefile. Using a binary from the shell is no problem, I know enough about shell programming.
It has been a while since I was dealing with this topic, and I never actually got around to testing what an acquaintance of mine, Denys Vlasenko, maintainer of BusyBox, proposed as a solution several months ago. Since I just checked my account here on Stack Overflow and saw the question again, let me share his insights with you. Maybe they will be helpful for somebody:
One relatively easy hack I can propose is to do the following:
I assume that you have a running server app which opened a Unix domain listening socket (say, /tmp/some.socket), and client programs connect to it and talk to the server.
rename /tmp/some.socket -> /tmp/some.socket1
create a new socket /tmp/some.socket
listen on it for new client connections
for every such connection, open another connection to /tmp/some.socket1 to original server process
pump data (client<->server) over resulting pairs of sockets (code to do so is very similar to what telnetd server does) until EOF from either side.
While you are pumping data, it's easy to look at it, to save it, and even to modify it if you need to.
The downside is that this sniffer program needs to be restarted every time the original server program is restarted.
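Denys talks about it in C terms, but the same pass-through loop can be sketched in Perl, in case Perl is available on the target. This is only an illustration: it assumes a stream (not datagram) socket, that the original socket has already been renamed to /tmp/some.socket1, and the "IP_CHANGED" pattern and handler path are pure placeholders for whatever message you actually need to react to:

#!/usr/bin/perl
# Interpose on a UNIX domain socket: listen on the original name,
# forward everything to the renamed real socket, and watch the bytes.
use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;
use IO::Select;

my $listener = IO::Socket::UNIX->new(
    Type => SOCK_STREAM, Local => '/tmp/some.socket', Listen => 5,
) or die "listen: $!";

while (my $client = $listener->accept) {
    my $server = IO::Socket::UNIX->new(
        Type => SOCK_STREAM, Peer => '/tmp/some.socket1',
    ) or die "connect to real server: $!";

    my $sel = IO::Select->new($client, $server);
    PUMP: while (my @ready = $sel->can_read) {
        for my $fh (@ready) {
            my $n = sysread($fh, my $buf, 4096);
            last PUMP unless $n;    # EOF (or error) on either side
            my $peer = (fileno($fh) == fileno($client)) ? $server : $client;
            syswrite($peer, $buf);
            system('/path/to/handler.sh') if $buf =~ /IP_CHANGED/;  # placeholder trigger
        }
    }
    close $_ for $client, $server;
}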
This is similar to what Celada also answered. Thanks to him as well! Denys's answer was a bit more concrete, though.
I asked back:
This sounds hacky, yes, because of the restart necessity, but feasible. Me not being a C programmer, I keep wondering, though, whether you know a command line tool which could do the pass-through, logging, and event-based triggering work for me. I have one guy from our project in mind who could hack a little C binary for that, but I am unsure whether he would want to do it. If there is something pre-fab, I would prefer it. Can it even be done with a (combination of) BusyBox applet(s), maybe?
Denys answered again:
You need to build busybox with CONFIG_FEATURE_UNIX_LOCAL=y.
Run the following as intercepting server:
busybox tcpsvd -vvvE local:/tmp/socket 0 ./script.sh
where script.sh is a simple pass-through connection to the "original server":
#!/bin/sh
busybox nc -o /tmp/hexdump.$$ local:/tmp/socket1 0
As an example, I added hex logging to file (-o FILE option).
Test it by running an emulated "original server":
busybox tcpsvd -vvvE local:/tmp/socket1 0 sh -c 'echo PID:$$'
and by connecting to "intercepting server":
echo Hello world | busybox nc local:/tmp/socket 0
You should see a "PID:19094" message and have a new /tmp/hexdump.19093 file with the dumped data. Both tcpsvd processes should print some log too (they are run with -vvv verbosity).
If you need more complex processing, replace the nc invocation in script.sh with a custom program.
I don't think there is anything that will let you cleanly sniff UNIX socket traffic. Here are some options:
Arrange for the sender process to connect to a different socket where you are listening. Also connect to the original socket as a client. On receipt of data, notice the data you want to notice and also pass everything along to the original socket.
Monitor the system for IP address changes yourself using a netlink socket (RTM_NEWADDR, RTM_NEWLINK, etc...).
Run ip monitor as an external process and take action when it writes messages about added & removed IP addresses on its standard output.
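If you go the ip monitor route and have Perl on the box, a tiny watchdog could look like the sketch below. Caveats: a BusyBox ip applet may not support the monitor subcommand, the output format varies between builds (so the match is deliberately loose), and the hook script path is a placeholder:

#!/usr/bin/perl
# Run `ip monitor address` and call a hook script whenever a new
# address shows up. The hook path is hypothetical.
use strict;
use warnings;

open(my $ipmon, '-|', 'ip', 'monitor', 'address')
    or die "cannot run ip monitor: $!";

while (my $line = <$ipmon>) {
    # lines look roughly like: "5: ppp0    inet 1.2.3.4/32 scope global ppp0"
    if ($line =~ /\binet\s+([0-9.]+)/) {
        system('/usr/local/bin/on-ip-change.sh', $1);
    }
}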
Let's say I have an Emacs server running on some remote machine, with all the libraries and software necessary for running my application.
Then I want several clients to connect to that remote machine, using Emacs-client. Does each client need a full Emacs installation, or is there a minimal installation that is just enough to communicate with the remote server, where all the action is?
Could this (Emacs-)client installation be so minimal, that almost all software-updates can be done on the server, without affecting the Emacs-clients?
Is there a reason not to run the clients remotely as well, and simply use a local display? That way, pretty much all you need on the local machines is the ssh client and the X Window server.
ssh -X (user)@(server) "emacsclient -c"
Edits for the comments:
This command starts a new client to connect to an existing Emacs server (which it assumes is already running). You can use "emacsclient -a '' -c" to automatically start emacs --daemon if there is no existing server, but I don't know whether you want the connecting user to be starting the server.
In fact, I'm pretty unsure about the whole multi-user side of this to be honest, as I've never done that before. Authentication for the above is handled by ssh, but there may well be subsequent permission issues to deal with, or similar, when the server and the clients are started by different users.
This approach should be possible with Windows/Cygwin as client and/or server, as Cygwin provides Emacs, OpenSSH, and X.org packages. (I regularly use Windows/Cygwin as a local display for Emacs running on Linux.) It may be harder to set up, though, and any permissions issues are probably different when you're using Cygwin.
I'm less sure how this would work without Cygwin. NTEmacs certainly won't talk to X.org, so I imagine you'd be terminal based in that instance. (There are probably other options, but Cygwin sounds to me like the best-integrated approach to using all of Emacs, SSH, and X on Windows).
Lastly, I imagine you're probably getting your "Connection refused" error because localhost is not running an sshd daemon? I would say that configuration of ssh is outside the scope of this question, but there are lots of resources online for that.
Depending on what you're trying to achieve, you may be able to use a combination of Emacs and Screen. By starting up Emacs from Screen on the remote machine and detaching from it, you can subsequently re-attach from a different machine that doesn't have Emacs. Again, whether this will work for you or not depends on what you're trying to do; however, for many Emacs use-cases, this can be very effective. If you're not familiar with using Screen in this manner, here is some reading material:
screen - The Terminal Multiplexer
I am not sure that would be possible. emacsclient uses tramp to connect to a remote server, and just by looking at the number of requires in the tramp elisp files (41) it seems very unlikely. You can try it yourself with the following:
zgrep -oE "\(require '[a-z-]+\)" *el.gz | sed -e 's%[a-z0-9-]\+\.el\.gz:%%g' | sort | uniq -cu | wc -l
I'm not an expert in emacsclient, but I don't think it was designed to do what you're looking for. I think the general use case is that emacsclient allows you to redirect new requests to open a file with Emacs to a persistent Emacs process, to avoid what may be a bit of overhead in startup time. You seem to be looking for more of a true client/server relationship.
I think to meet the goal you're aiming at, you'll probably need to look a little outside Emacs; it would probably be a project unto itself ('emacsRemoteClient'). It boils down to one of two models: either the path of the file you want to edit is sent over to the server machine so that Emacs can do some sort of remote tramp access and then spawn the X window locally (using the local X environment, or requiring an X server on Windows); or the file is transferred to some temp location on the server box, again spawning the remote X window locally (followed by syncing the changes between the temp and local file).
It would be cool to have something like that... but I suspect it'll involve a bit of work. Maybe we just need a version of Emacs written in JavaScript so it can live in the cloud or in your browser... oh, to have Emacs keybindings in the browser ;-)
-Steve
I basically want to be able to send the Apache log file line by line (tail) between two servers (unidirectionally, from one to the other). I want to use Perl.
Any ideas? I would like to be able to do things with each line of the Apache log in real time, but on another server.
Thank you!
I'm not sure about Perl (you can probably wrap this up in a bit of Perl so you can manipulate the data), but netcat (or nc for short) should be available on most systems and can do this.
On one server
tail -f filename | nc -l 12345
On the other server
nc hostname 12345
Of course you can use a different port number. So I guess in Perl you would exec these commands (ssh to the remote server etc.). Hopefully this has given you some ideas! nc has loads of options so you should be able to find something.
If you want to write netcat in Perl then that's a slightly different story.
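If you do want the receiving end in Perl rather than nc, a minimal sketch might look like this (the port number is just an example, and it handles one connection at a time):

#!/usr/bin/perl
# Perl stand-in for `nc -l 12345` on the receiving server: accept a
# connection and process each forwarded log line as it arrives.
use strict;
use warnings;
use IO::Socket::INET;

my $listener = IO::Socket::INET->new(
    LocalPort => 12345,
    Proto     => 'tcp',
    Listen    => 1,
    ReuseAddr => 1,
) or die "listen: $!";

while (my $conn = $listener->accept) {
    while (my $line = <$conn>) {
        chomp $line;
        # ... do whatever real-time processing you need with $line ...
        print "got: $line\n";
    }
}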
You can use piped logs. That way your Perl script will get every log line on standard input, and then it's up to you how you send them on (SSH, FTP, HTTP, or maybe even a SQL connection, etc.).
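A minimal sketch of such a piped-log script, assuming an Apache directive along the lines of CustomLog "|/usr/local/bin/shiplog.pl" combined and a collector listening on TCP port 12345 on the other server (the path, host, and port are all placeholders):

#!/usr/bin/perl
# Reads Apache log lines on STDIN (piped log) and forwards each one
# over TCP to a remote collector. Host and port are placeholders.
use strict;
use warnings;
use IO::Socket::INET;

my $remote = IO::Socket::INET->new(
    PeerAddr => 'otherserver.example.com',
    PeerPort => 12345,
    Proto    => 'tcp',
) or die "cannot connect: $!";
$remote->autoflush(1);

while (my $line = <STDIN>) {
    print {$remote} $line;
}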
A very simple solution is to tail the log file from the remote host via SFTP using Net::SFTP::Foreign.
The module contains a sample script implementing the remote tail: sftp_tail.pl
I have managed to successfully serve my Catalyst app on my development machine using Plack + Starman, using a daemon script I based on one I found in Dave Rolsky's Silki distribution.
I then set up nginx to reverse proxy to my Starman server, and aliased the static directory for nginx to serve. So far, so good. However, I am at a loss as to where my application STDERR is supposed to be logging to. It isn't reaching nginx (I suppose that makes sense) but I can't find much documentation as to where Starman may be logging it - if anywhere. I did have a look at Plack's Middleware modules but only saw options for access logs.
Can someone help me?
It's going nowhere. Catalyst::Log is sending data to STDERR, and the init script is sending STDERR to /dev/null.
You have a few basic choices:
Replace Catalyst::Log with something like Catalyst::Log::Log4perl, or simply a subclass of Catalyst::Log with an overridden _send_to_log -- either one will allow you to send the logging output somewhere other than STDERR (a rough sketch of the subclass approach follows this list).
Write some code that runs at the PSGI level to manage a logfile and reopen STDERR to it. I tried this, it wasn't very pleasant. Logfiles are harder than they look.
Use FastCGI instead, and you'll have an error stream that sends the log output back to the webserver. You can still use Plack via Plack::Handler::FCGI / Plack::Handler::FCGI::Engine (I'd recommend the latter, because the FCGI::Engine code is much newer and nicer than FCGI.pm).
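For the first option, a rough sketch of such a subclass might look like the following. The log path is a placeholder, and _send_to_log is a private method of Catalyst::Log, so double-check it still exists in your Catalyst version:

package MyApp::Log;
use strict;
use warnings;
use parent 'Catalyst::Log';

# Append log messages to a file instead of printing them to STDERR.
# /var/log/myapp/error.log is a placeholder path.
sub _send_to_log {
    my ($self, @messages) = @_;
    open my $fh, '>>', '/var/log/myapp/error.log' or return;
    print {$fh} @messages;
    close $fh;
}

1;

# Then, in MyApp.pm, before calling __PACKAGE__->setup:
#   __PACKAGE__->log( MyApp::Log->new );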
I realise it is a long time since the question was asked, but I've just hit the same problem...
You actually have one more option than Hobbs mentioned.
It isn't quite the "init script" that is sending STDERR to /dev/null, it is Starman.
If you look at the source code for Starman, you would discover that, if you give it the --background flag, it uses MooseX::Daemonize::Core.
And once you know that, its documentation will tell you that it deliberately closes STDERR, STDOUT and STDIN and re-directs them to /dev/null, AND that it takes the environment variables MX_DAEMON_STDERR and MX_DAEMON_STDOUT as names of files to use instead.
So if you start your catalyst server with MX_DAEMON_STDERR set to a file name, STDERR will go to that file.
Today Starman has a --error-log command line option which allows you to redirect error messages to a file.
See documentation of starman:
--error-log
Specify the pathname of a file where the error log should be written. This enables you to still have access to the errors when using --daemonize.
I have a Perl Expect.pm script that does some moderately complex stuff like packaging applications, deploying the application, checking for logs, etc. on multiple remote unix hosts.
My predecessor had written similar scripts using rsh.
Is there a better approach between the two? Or should I use something all together different?
I am guessing somebody will bring up SSH; it's basically a replacement for rsh, right? Unfortunately, however, SSH is not an option for me right now.
Another thing I should add is that after logging in I need to be able to SUDO to a particular user to do most of the actions on the remote hosts.
"Another thing I should add is that after logging in I need to be able to SUDO to a particular user to do most of the actions on the remote hosts."
To address this one particular point: using rsh (or ssh) you can specify which user to become in the remote session:
$ rsh -l username hostname
There's no need to use sudo in this case. Now would definitely be the time to look into ssh due to security issues. The syntax is the same, but ssh also allows a slightly different (and I'd say better) syntax:
$ ssh username@hostname
I found expect to be too finicky, but my experience with it is not substantial.
They do different things. Expect is a way to script what would otherwise be manual responses. rsh -- remote shell, not restricted shell, an unfortunate name clash -- allows you to run commands remotely on another system.
That said, the security holes and other disadvantages of using rsh to do remote commands, run sudo, etc, are immense.
I can see multiple ways of doing this:
Expect over telnet, rsh, or ssh
pros: single connection, fewer escaping issues
cons: fragility in a changing environment
rsh/ssh each command individually
pros: fewer escaping issues, more reliable in a changing environment
cons: each connection takes time for authentication, and, for ssh, handshaking the encryption
rsh/ssh all commands at once
pros: single connection (less overhead), more reliable than expect
cons: fragility in maintenance especially as you get more than a handful of statements in there, escaping issues are more prevalent (escape in perl so that it's still escaped by rsh/ssh so that it's still escaped by the remote shell so that it's properly handled by the sudo'd remote shell?)
rsh/ssh and run a script
pros: single connection, more reliable, more maintainable
cons: finding a way to get it over there (rcp/scp work, NFS works, you need to determine the best way for you).
All things considered, this is the most minor con as you could simply do something like
open my $fh, "|ssh user\@host 'cat > /tmp/myscript'";
print $fh $script;
close $fh;   # wait for the remote cat to finish writing the file
system qw(ssh user@host), "chmod u+x /tmp/myscript; /tmp/myscript; rm /tmp/myscript";
Of course, you'd add in some error handling (failed open, what if /tmp/myscript exists, etc.), but that's the idea.
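For comparison, the Expect route (option 1) for a single session looks roughly like the sketch below; the hostnames, remote command, and prompt patterns are placeholders, and those patterns are exactly the kind of thing that makes these scripts fragile:

#!/usr/bin/perl
# Rough Expect.pm sketch: rsh to a host, sudo to another user, run one
# command. All names, paths, and prompt patterns are placeholders.
use strict;
use warnings;
use Expect;

my $exp = Expect->spawn('rsh', '-l', 'deploy', 'somehost')
    or die "cannot spawn rsh: $!";

$exp->expect(30, '-re', '[\$%#>]\s*$') or die "no shell prompt";
$exp->send("sudo -u appuser /opt/app/bin/deploy.sh\r");
if ($exp->expect(15, '-re', '(?i)password')) {
    $exp->send("$ENV{SUDO_PASS}\r");   # password from the environment, for illustration only
}
$exp->expect(300, '-re', '[\$%#>]\s*$') or die "command did not finish";
$exp->send("exit\r");
$exp->soft_close;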
Given a choice between rsh and telnet via expect, I would choose rsh. Expect scripts are fragile. All it takes to break one is someone changing the value of PS1 on the remote machine. Using rsh also prepares you for the day you will finally enter the '90s and start using ssh (since you can mostly just change rcp to scp and rsh to ssh and everything still works).