I'd like to create a named pipe, like the one created by "mkfifo", but with one caveat: I want the pipe to be bidirectional. That is, I want process A to write to the fifo and process B to read from it, and vice-versa. A pipe created by "mkfifo" allows process A to read back the data it has written to the pipe. Normally I'd use two pipes, but I am trying to simulate an actual device, so I'd like the semantics of open(), read(), write(), etc. to be as similar to the actual device as possible. Does anyone know of a technique to accomplish this without resorting to two pipes or a named socket?
Or a pty ("pseudo-terminal interface"); see man pty.
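A minimal sketch of the pty idea in Perl, assuming the IO::Pty module from CPAN; the slave side gets a real device name that another process can open(), read(), and write() like an ordinary device node (the single-process demo below is only to keep the sketch short):

    use strict;
    use warnings;
    use IO::Pty;

    # Allocate a master/slave pseudo-terminal pair.
    my $pty = IO::Pty->new;

    # The slave end has a real pathname (e.g. /dev/pts/7) that process B
    # could open() exactly like a device file.
    print "point process B at: ", $pty->ttyname, "\n";

    # Process A talks through the master side; traffic flows both ways.
    # (Reading the slave in the same process is only for demonstration;
    #  the terminal line discipline may echo or edit what passes through.)
    $pty->print("hello from A\n");
    my $slave = $pty->slave;
    my $line  = <$slave>;
    print "slave saw: $line";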
Use a Unix-domain socket.
Oh, you said you don't want to use the only available solution - a Unix-domain socket.
In that case, you are stuck with opening two named pipes, or doing without. Or write your own device driver for them, of course - you could do it for the open source systems, anyway; it might be harder for the closed source systems (Windows, AIX, HP-UX).
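If two named pipes do turn out to be acceptable, a hedged sketch of that fallback in Perl (the paths and the open order are illustrative):

    use strict;
    use warnings;
    use POSIX qw(mkfifo);

    # One FIFO per direction; the names are arbitrary for this sketch.
    my ($a2b, $b2a) = ('/tmp/dev.a2b', '/tmp/dev.b2a');
    for my $path ($a2b, $b2a) {
        next if -p $path;                       # already created on a previous run
        mkfifo($path, 0600) or die "mkfifo $path: $!";
    }

    # Process A writes requests on one FIFO and reads replies on the other.
    # Opening a FIFO blocks until the peer opens the opposite end, so A and B
    # must open the two FIFOs in mirrored order to avoid a deadlock.
    open my $out, '>', $a2b or die "open $a2b: $!";
    open my $in,  '<', $b2a or die "open $b2a: $!";
    print {$out} "ping\n";
    my $reply = <$in>;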
I just learned about pipes and sockets today and that sockets are special because they allow you to transmit file descriptors between processes.
I've also read that it is sendmsg() and the msghdr structure that are used to produce this behavior.
My professor told me that pipes can't be used to replicate this behavior/feature, but I am interested in exactly what part of the implementation allows sockets to do what pipes can't.
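For what it's worth, the feature in question is the SCM_RIGHTS ancillary data that sendmsg() can attach to a message on a Unix-domain socket; the kernel duplicates the descriptor into the receiving process. A hedged Perl sketch, assuming the CPAN wrapper IO::FDPass (an assumption, not something from the question):

    use strict;
    use warnings;
    use Socket;
    use IO::FDPass;    # thin wrapper around sendmsg()/recvmsg() with SCM_RIGHTS

    # Descriptor passing only works over AF_UNIX sockets.
    socketpair(my $parent, my $child, AF_UNIX, SOCK_STREAM, 0)
        or die "socketpair: $!";

    defined(my $pid = fork) or die "fork: $!";
    if ($pid == 0) {                            # child: receive the descriptor
        my $fd = IO::FDPass::recv(fileno $child);
        die "recv failed: $!" if $fd < 0;
        open my $fh, '<&=', $fd or die "fdopen: $!";   # wrap the raw fd in a handle
        print "child read: ", scalar <$fh>;
        exit 0;
    }

    open my $file, '<', '/etc/hostname' or die "open: $!";   # any open descriptor
    IO::FDPass::send(fileno $parent, fileno $file) or die "send failed: $!";
    wait;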
I have a simple web form with a little JS script that sends form values to a text box. This combined value becomes a database query.
It will be sent to dsmadmc (the TSM administrative command line).
How can I use perl to keep the dsmadmc process open for consecutive input/output without the dsmadmc process closing between each input command sent?
And how can I capture the output - this is to be sent back to the same web page, in a separate div.
Any thoughts, anyone?
Probably IPC::Open2 could help. It lets you write to the input and read from the output of an external process.
Beware of deadlocks though (i.e. situations where both your code and the app wait for their counterpart). You might want to use IO::Select to handle that.
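A minimal sketch of that approach, assuming dsmadmc accepts its credentials on the command line (the -id/-password/-dataonly options and the end-of-output test are illustrative, not verified):

    use strict;
    use warnings;
    use IO::Handle;
    use IPC::Open2;

    # Start dsmadmc once and keep both its stdin and stdout attached to us.
    my $pid = open2(my $from_dsm, my $to_dsm,
                    'dsmadmc', '-id=admin', '-password=secret', '-dataonly=yes');
    $to_dsm->autoflush(1);

    for my $query (@ARGV) {                  # one admin command per argument
        print {$to_dsm} "$query\n";

        # Read the reply.  A real script needs a smarter end-of-output test
        # (or IO::Select with a timeout) to avoid blocking here forever.
        while (my $line = <$from_dsm>) {
            print $line;
            last if $line =~ /^ANS\d+/;      # illustrative sentinel only
        }
    }

    close $to_dsm;                           # EOF on stdin tells dsmadmc to exit
    waitpid $pid, 0;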
P.S. I don't know how these modules behave on Windows, but from a quick Google search it looks like they are compatible.
I have inherited a 20-year-old interactive command-line unix application that is no longer supported by its vendor. We need to automate some tasks in this application.
The most troublesome of these is creating thousands of new records with slightly different parameters (e.g. different identifiers, different names). The records have to be created in sequence, one at a time, which would take many months (and therefore dollars) to do manually. In most cases, creating a record follows a very predictable pattern of keying in commands, reading responses, keying in further commands, and so on. However, some record creation operations will result in error conditions ('record with this identifier already exists') that require a different set of commands to exit gracefully.
I can see a few different ways to do this:
Named pipes. Write a Perl script that runs the target application with STDIN and STDOUT set to named pipes then sends the target application the sequence of commands to create a record with the required parameters, and then instructs the target application to exit and shut down. We then run the script as many times as required with different parameters.
Application. Find another Unix tool that can be used to script interactive programs. The only ones I have been able to find, though, are expect, which does not seem to be maintained, and chat, which I recall from ages ago and which seems to do more-or-less what I want, but appears to be only for controlling modems.
One more potential complication: I think the target application was written for a VT100 terminal and it uses some sort of escape sequences to do things like provide highlighting.
My question is: which approach should I take? One of these, or something completely different? I quite like the idea of using named pipes and then having a Perl script that opens the FIFOs and reads and writes as required, as it provides a lot of flexibility, but from what I have read it seems like there are a lot of potential problems if I go down this path.
Thanks in advance.
I'd definitely stick to Perl for the extra flexibility, as chaos suggested. Are you aware of the Expect Perl module? It's a lot nicer than the named pipe approach.
Note also with named pipes, you can't force the output coming back from your legacy application to be unbuffered, which could be annoying. I think Expect.pm uses pseudo-ttys to get around this problem, but I'm not sure. See the discussion in perlipc in the section "Bidirectional Communication with Another Process" for more details.
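A hedged sketch of what the Expect.pm route might look like for one record; the program path, prompt text, commands, and error message below are placeholders for whatever the legacy application actually prints:

    use strict;
    use warnings;
    use Expect;

    my ($id, $name) = @ARGV;

    # Spawn the legacy application under a pseudo-tty so its output is unbuffered.
    my $exp = Expect->new;
    $exp->raw_pty(1);                                  # must be set before spawn
    $exp->spawn('/usr/local/bin/legacy-app')           # path is a placeholder
        or die "cannot spawn: $!";

    # "Command>" and the commands below stand in for the real dialogue.
    $exp->expect(10, 'Command>') or die "never saw the prompt";
    $exp->send("CREATE RECORD $id NAME=$name\n");

    # Branch on success vs. "already exists", as described in the question.
    $exp->expect(10,
        [ qr/record created/i => sub { $exp->send("COMMIT\n"); exp_continue } ],
        [ qr/already exists/i => sub { $exp->send("CANCEL\n"); exp_continue } ],
        [ 'Command>'          => sub { $exp->send("EXIT\n") } ],
    );

    $exp->soft_close;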
expect is a lot more solid than you're probably giving it credit for, but if I were you I'd still go with the Perl option, wanting to have a full and familiar programming language for managing the process and having confidence that whatever weird issues arise, there will be ways of addressing them.
Expect, either with the Tcl or Perl implementations, would be my first attempt. If you are seeing odd sequences in the output because it's doing odd terminal things, just filter those from the output before you do your matching.
With named pipes, you're going to end up reinventing Expect anyway.
In Perl, how do I test if a file is open in another Perl program? I need the first program to be done before the second program can start.
FILES DO NOT WORK THAT WAY!
But seriously, folks, advisory locks with flock are generally the best you can do. There's no way to guarantee that no other program wants to read or write a file at the same time as you.
You may be able to coordinate the two programs using flock: The first program would lock the file, and the second program would also try to acquire a lock on it, and it would block until the first program releases the lock.
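A minimal sketch of that coordination, assuming both programs agree on one lock file (the path is arbitrary):

    use strict;
    use warnings;
    use Fcntl qw(:flock);

    # Both programs run this; LOCK_EX blocks until the other side has
    # released its lock, so the second program simply waits its turn.
    open my $lock, '>>', '/tmp/myjob.lock' or die "open: $!";
    flock($lock, LOCK_EX) or die "flock: $!";

    # ... do the work on the real data file here ...

    flock($lock, LOCK_UN);          # or just let the handle go out of scope
    close $lock;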
Is flock() available on your system? Otherwise, the two programs have to be synchronized; they can communicate through a pipe or a socket, or via the presence/absence of a file.
Another direction, if you are on a Unix-like system, could be to use the output of lsof.
I assume having the first program starting the second one is not feasible.
In my experience, flock works fine on local systems on both Windows and Linux.
You could also, presumably, have the first program exec the second program when it's done processing the file.
If you are running on Windows, you could call CreateFile directly with a dwShareMode of 0.
According to MSDN:
Prevents other processes from opening a file or device if they request delete, read, or write access.
Win32API::File gives access to this call.
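A hedged sketch of that call from Perl via Win32API::File; the constant names mirror the Win32 API, but check the module's documentation before relying on the exact argument list:

    use strict;
    use warnings;
    use Win32API::File qw(CreateFile CloseHandle GENERIC_READ OPEN_EXISTING
                          FILE_ATTRIBUTE_NORMAL);

    # dwShareMode = 0: no other process may open the file while we hold it.
    my $handle = CreateFile(
        'C:\\data\\report.txt',   # path is a placeholder
        GENERIC_READ,             # desired access
        0,                        # share mode 0: exclusive, no sharing at all
        [],                       # default security attributes
        OPEN_EXISTING,            # fail if the file does not exist
        FILE_ATTRIBUTE_NORMAL,    # flags/attributes
        [],                       # no template handle
    ) or die "CreateFile failed: $^E";

    # ... the file is now exclusively ours ...
    CloseHandle($handle);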
To specifically look for whether or not a file is open or in use, if you're on Unix there is a wrapper around the lsof command to list open files: Unix::Lsof
If you're on Unix, you could also call fuser.
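A tiny hedged helper along those lines, assuming the fuser from Linux's psmisc package (its -s flag suppresses output and only sets the exit status):

    use strict;
    use warnings;

    # True if some process currently has $path open, according to fuser.
    # psmisc's fuser exits with status 0 when the file is in use.
    sub file_in_use {
        my ($path) = @_;
        return system('fuser', '-s', $path) == 0;
    }

    print "busy\n" if file_in_use('/var/log/syslog');   # path is just an example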
I might be misunderstanding the context, and my comment may have limited usability in your case, but depending on what you are using the code for/on, using serial queues to ensure that tasks execute in a predictable order may be an option. Your application (written in Perl) will need to explicitly create and manage the serial queues. For more details refer to the following link: GCD
There are so many possible errors in the POSIX environment. Why do some of them (like writing to an unconnected socket in particular) get special treatment in the form of signals?
This is by design, so that simple programs producing text (e.g. find, grep, cat) used in a pipeline die when their consumer dies. That is, if you're running a chain like find | grep | sed | head, head will exit as soon as it has read enough lines. That will kill sed with SIGPIPE, which will kill grep with SIGPIPE, which will kill find with SIGPIPE. If there were no SIGPIPE, naively written programs would continue running and producing content that nobody needs.
If you don't want to get SIGPIPE in your program, just ignore it with a call to signal(). After that, syscalls like write() that hit a broken pipe will fail with errno set to EPIPE instead.
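The same idea from Perl, as a minimal sketch: ignore SIGPIPE and check for EPIPE on the write instead (head is used as a throwaway consumer here):

    use strict;
    use warnings;
    use IO::Handle;
    use Errno qw(EPIPE);

    $SIG{PIPE} = 'IGNORE';            # Perl's equivalent of signal(SIGPIPE, SIG_IGN)

    open my $pipe, '|-', 'head', '-n', '1' or die "cannot start head: $!";
    $pipe->autoflush(1);

    for my $n (1 .. 1_000_000) {
        unless (print {$pipe} "line $n\n") {
            # head exited after one line; because SIGPIPE is ignored we get
            # EPIPE back from the failed write instead of being killed.
            warn "consumer went away\n" if $! == EPIPE;
            last;
        }
    }
    close $pipe;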
See this SO answer for a detailed explanation of why writing a closed descriptor / socket generates SIGPIPE.
Why is writing a closed TCP socket worse than reading one?
SIGPIPE isn't specific to sockets; as the name suggests, it is also sent when you try to write to a pipe (anonymous or named). I guess the reason for having separate error-handling behaviour is that broken pipes shouldn't always be treated as an error (whereas, for example, trying to write to a file that doesn't exist should always be treated as an error).
Consider the program less. This program reads input from stdin (unless a filename is specified) and only shows part of it at a time. If the user scrolls down, it will try to read more input from stdin, and display that. Since it doesn't read all the input at once, the pipe will be broken if the user quits (e.g. by pressing q) before the input has all been read. This isn't really a problem, though, so the program that's writing down the pipe should handle it gracefully.
It's up to the design.
In the beginning, signals were used to notify user space of events. Later that became less necessary, because more popular mechanisms such as polling don't require the caller to install a signal handler.