If a process is abruptly stopped with the "kill" command, it will not have deleted the UDS socket files it created. In such cases, can we use the "rm" command to delete the socket file?
And even after deleting the socket file with "rm", could there be stale state left in the kernel that might cause a "Bad file descriptor" error if another process tries to create a socket with the same name?
Yes, you can delete it. It would be sensible for a program that uses UNIX domain sockets to delete its own leftover socket, if it is still there when the program starts.
"Bad file descriptor" is always a bug in the program. It means your program tried to use some file descriptor number that wasn't actually a file descriptor.
The "Notes" section of the Man 7 unix page says:
Binding to a socket with a filename creates a socket in the file
system that must be deleted by the caller when it is no longer needed
(using unlink(2)). The usual UNIX close-behind semantics apply; the
socket can be unlinked at any time and will be finally removed from
the file system when the last reference to it is closed.
According to the man page, you should be able to unlink() the file from the file system immediately after bind()ing the socket to the filename. The socket will then be removed from the file system automatically when the last reference to it is closed.
Note, however, that once the name is unlinked, new clients can no longer connect through it, so this pattern suits sockets whose connections are already established rather than long-lived listeners.
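The relevant change to the earlier sketch is just the order of the two calls (again a hedged fragment, same hypothetical SOCK_PATH):

/* Variant of the earlier sketch: bind first, then unlink immediately.
   Close-behind semantics keep the socket usable through fd; only the
   filesystem name disappears right away, so no new client can connect
   through it afterwards. */
if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
    perror("bind");
    return 1;
}
unlink(SOCK_PATH);   /* name is gone now; fd itself stays fully usable */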
With Linux, you can also bind Unix domain sockets to "abstract names" (sun_path beginning with a null byte). That way you avoid using the filesystem with Unix domain sockets at all.
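A short sketch of binding to an abstract name. This is Linux-specific, and "myapp" is a made-up name for this example:

#include <stddef.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); return 1; }

    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    /* Abstract name: sun_path begins with a null byte, so the socket
       never appears in the filesystem and needs no unlink(). */
    memcpy(addr.sun_path + 1, "myapp", 5);   /* "myapp" is hypothetical */

    if (bind(fd, (struct sockaddr *)&addr,
             offsetof(struct sockaddr_un, sun_path) + 1 + 5) == -1) {
        perror("bind");
        return 1;
    }
    /* listen()/accept() as usual */
    return 0;
}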
I am learning to write character device drivers from the Kernel Module Programming Guide, and used mknod to create a node in /dev to talk to my driver.
However, I cannot find any obvious way to remove it, after checking the manpage and observing that rmnod is a non-existent command.
What is the correct way to reverse the effect of mknod, and safely remove the node created in /dev?
The correct command is just rm :)
A device node created by mknod is just a file that contains a device major and minor number. When you access that file the first time, Linux looks for a driver that advertises that major/minor and loads it. Your driver then handles all I/O with that file.
When you delete a device node, the usual Un*x file behavior applies: Linux will wait until there are no more references to the file, and then it will be deleted from disk.
Your driver doesn't really notice any of this. Linux does not automatically unload modules. Your driver will simply no longer receive requests to do anything. But it will be ready in case anybody recreates the device node.
You are probably looking for a function rather than a command. unlink() is the answer. unlink() will remove the file/special file if no process has the file open. If any processes have the file open, then the file will remain until the last file descriptor referring to it is closed. Read more here: http://man7.org/linux/man-pages/man2/unlink.2.html
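If you need to do the same from code rather than the shell, a minimal sketch (the node /dev/mydev is a hypothetical example):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Removes the directory entry; the node is actually freed once the
       last open file descriptor referring to it is closed. */
    if (unlink("/dev/mydev") == -1) {   /* hypothetical device node */
        perror("unlink");
        return 1;
    }
    return 0;
}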
I have a server with Apache.
I have a problem with concurrent read-write operations on one file.
Assume I have an index.html file in the Apache DocRoot. In a browser I can open and read it.
I'm using the Eclipse IDE to modify files directly on the server through SSH (or FTP).
After making some changes to the file, I upload it to the server. The upload takes some time.
The problem: if I try to view the file in the browser WHILE THE FILE IS UPLOADING, the upload hangs and the target file becomes blank. It looks like Apache and the SSH server are both trying to access the file, SSH to write and Apache to read, and the collision breaks everything.
Any ideas how to avoid this? Maybe some SSH server config options or Apache module?
You need to lock the file first. Do you know which operating system and Apache configuration you use? Is it your own system?
Here is a quote from the apache server docs:
EnableMMAP Directive
Description: Use memory-mapping to read files during delivery
Syntax: EnableMMAP On|Off
Default: EnableMMAP On
Context: server config, virtual host, directory, .htaccess
Override: FileInfo
Status: Core
Module: core
This directive controls whether the httpd may use memory-mapping if it needs to read the contents of a file during delivery. By default, when the handling of a request requires access to the data within a file -- for example, when delivering a server-parsed file using mod_include -- Apache httpd memory-maps the file if the OS supports it.
This memory-mapping sometimes yields a performance improvement. But in some environments, it is better to disable the memory-mapping to prevent operational problems:
• On some multiprocessor systems, memory-mapping can reduce the performance of the httpd.
• Deleting or truncating a file while httpd has it memory-mapped can cause httpd to crash with a segmentation fault.
For server configurations that are vulnerable to these problems, you should disable memory-mapping of delivered files by specifying:
EnableMMAP Off
For NFS mounted files, this feature may be disabled explicitly for the offending files by specifying:
EnableMMAP Off
As your server is crashing, I suspect that you have this option set for the directory your file is in.
Add
EnableMMAP Off
to the .htaccess file for your directory.
I am on an embedded platform (mipsel architecture, Linux 2.6 kernel) where I need to monitor IPC between two closed-source processes (router firmware) in order to react to a certain event (dynamic IP change because of DSL reconnect). What I found out so far via strace is that whenever the IP changes, the DSL daemon writes a special message into a UNIX domain socket bound to a specific file name. The message is consumed by another daemon.
Now here is my requirement: I want to monitor the data flow through that specific UNIX domain socket and trigger an event (call a shell script) if a certain message is detected. I tried to monitor the file name with inotify, but it does not work on socket files. I know I could run strace all the time, filtering its output and react to changes in the filtered log file, but that would be too heavy a solution because strace really slows down the system. I also know I could just poll for the IP address change via cron, but I want a watchdog, not a polling solution. And I am interested in finding out whether there is a tool which can specifically monitor UNIX domain sockets and react to specific messages flowing through in a predefined direction. I imagine something similar to inotifywait, i.e. the tool should wait for a certain event, then exit, so I can react to the event and loop back into starting the tool again, waiting for the next event of the same type.
Is there any existing Linux tool capable of doing that? Or is there some simple C code for a stand-alone binary which I could compile on my platform (uClibc, not glibc)? I am not a C expert, but capable of running a makefile. Using a binary from the shell is no problem, I know enough about shell programming.
It has been a while since I was dealing with this topic and did not actually get around to testing what an acquaintance of mine, Denys Vlasenko, maintainer of Busybox, proposed as a solution to me several months ago. Because I just checked my account here on StackOverflow and saw the question again, let me share his insights with you. Maybe it is helpful for somebody:
One relatively easy hack I can propose is to do the following:
I assume that you have a running server app which opened a Unix domain listening socket (say, /tmp/some.socket), and client programs connect to it and talk to the server.
1. Rename /tmp/some.socket -> /tmp/some.socket1.
2. Create a new socket /tmp/some.socket.
3. Listen on it for new client connections.
4. For every such connection, open another connection to /tmp/some.socket1, to the original server process.
5. Pump data (client <-> server) over the resulting pairs of sockets (the code to do so is very similar to what a telnetd server does) until EOF from either side.
While you are pumping data, it's easy to look at it, to save it, and even to modify it if you need to.
The downside is that this sniffer program needs to be restarted every time the original server program is restarted.
This is similar to what Celada also answered. Thanks to him as well! Denys's answer was a bit more concrete, though.
I asked back:
This sounds hacky, yes, because of the restart necessity, but feasible.
Me not being a C programmer, I keep wondering though if you know a
command line tool which could do the pass-through and protocolling or
event-based triggering work for me. I have one guy from our project in
mind who could hack a little C binary for that, but I am unsure if he
likes to do it. If there is something pre-fab, I would prefer it. Can it
even be done with a (combination of) BusyBox applet(s), maybe?
Denys answered again:
You need to build busybox with CONFIG_FEATURE_UNIX_LOCAL=y.
Run the following as intercepting server:
busybox tcpsvd -vvvE local:/tmp/socket 0 ./script.sh
Where script.sh is a simple passthrough connection
to the "original server":
#!/bin/sh
busybox nc -o /tmp/hexdump.$$ local:/tmp/socket1 0
As an example, I added hex logging to file (-o FILE option).
Test it by running an emulated "original server":
busybox tcpsvd -vvvE local:/tmp/socket1 0 sh -c 'echo PID:$$'
and by connecting to "intercepting server":
echo Hello world | busybox nc local:/tmp/socket 0
You should see "PID:19094" message and have a new /tmp/hexdump.19093 file
with the dumped data. Both tcpsvd processes should print some log too
(they are run with -vvv verbosity).
If you need more complex processing, replace nc invocation in script.sh
with a custom program.
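For what such a custom program might look like, here is my own rough, untested C sketch of the pass-through pump described earlier. It handles one client at a time, error handling is abbreviated, and the paths are the ones from Denys's example:

/* Rough sketch: intercepting proxy for a Unix domain socket.
 * Assumes the original server now listens on /tmp/some.socket1 and we
 * take over /tmp/some.socket. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <sys/un.h>

static int unix_sock(const char *path, int do_listen)
{
    struct sockaddr_un a;
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); exit(1); }
    memset(&a, 0, sizeof(a));
    a.sun_family = AF_UNIX;
    strncpy(a.sun_path, path, sizeof(a.sun_path) - 1);
    if (do_listen) {
        unlink(path);   /* clean up any stale name first */
        if (bind(fd, (struct sockaddr *)&a, sizeof(a)) == -1 ||
            listen(fd, 5) == -1) { perror(path); exit(1); }
    } else if (connect(fd, (struct sockaddr *)&a, sizeof(a)) == -1) {
        perror(path); exit(1);
    }
    return fd;
}

int main(void)
{
    int lfd = unix_sock("/tmp/some.socket", 1);     /* intercepting listener */
    for (;;) {
        int c = accept(lfd, NULL, NULL);            /* client side */
        int s;
        if (c == -1)
            continue;
        s = unix_sock("/tmp/some.socket1", 0);      /* original server side */
        for (;;) {
            char buf[4096];
            ssize_t n;
            int from, to;
            fd_set r;
            FD_ZERO(&r);
            FD_SET(c, &r);
            FD_SET(s, &r);
            if (select((c > s ? c : s) + 1, &r, NULL, NULL, NULL) == -1)
                break;
            from = FD_ISSET(c, &r) ? c : s;
            to = (from == c) ? s : c;
            n = read(from, buf, sizeof(buf));
            if (n <= 0)
                break;      /* EOF (or error) from either side: stop pumping */
            /* This is the place to inspect buf for the message that
               signals an IP change and to trigger a shell script. */
            write(to, buf, (size_t)n);
        }
        close(c);
        close(s);
    }
}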
I don't think there is anything that will let you cleanly sniff UNIX socket traffic. Here are some options:
Arrange for the sender process to connect to a different socket where you are listening. Also connect to the original socket as a client. On receipt of data, watch for the messages you care about and pass everything along to the original socket.
Monitor the system for IP address changes yourself using a netlink socket (RTM_NEWADDR, RTM_NEWLINK, etc...).
Run ip monitor as an external process and take action when it writes messages about added & removed IP addresses on its standard output.
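To illustrate the netlink option: a minimal, untested sketch that subscribes to IPv4 address events and exits on the first one, so a wrapper shell script can react and restart it, much like inotifywait:

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
    struct sockaddr_nl sa;
    char buf[8192];
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);
    if (fd == -1) { perror("socket"); return 1; }

    memset(&sa, 0, sizeof(sa));
    sa.nl_family = AF_NETLINK;
    sa.nl_groups = RTMGRP_IPV4_IFADDR;   /* IPv4 address add/remove events */
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) == -1) {
        perror("bind");
        return 1;
    }

    for (;;) {
        int len = recv(fd, buf, sizeof(buf), 0);
        struct nlmsghdr *nh;
        if (len <= 0)
            break;
        for (nh = (struct nlmsghdr *)buf; NLMSG_OK(nh, len);
             nh = NLMSG_NEXT(nh, len)) {
            if (nh->nlmsg_type == RTM_NEWADDR || nh->nlmsg_type == RTM_DELADDR)
                return 0;   /* address changed: exit so the wrapper can react */
        }
    }
    return 1;
}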
I am running a local instance of HTTP::Daemon using a modified version of the looping structure outlined in the documentation. I have made it possible to exit the loop at the user's request, but a subsequent execution of my Perl script gives me the error:
HTTP::Daemon: Address already in use ...propagated at /path/to/script line NNN, line 3.
What more must I do to be a good citizen and clean up after my Daemon?
Most likely nothing. The address is in use by leftover connections from the previous instance. As soon as they are all shut down, the address will be automatically released.
If you want to speed up this process, you can set the SO_REUSEADDR socket option before binding. See the Perl socket documentation for more details: "if a server dies without outstanding connections the port won't be immediately reusable unless you use the option SO_REUSEADDR using setsockopt() function."
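In C terms, which is the level the quoted sentence refers to, the pattern looks roughly like this sketch; port 8080 is a made-up example. In HTTP::Daemon itself this should amount to passing ReuseAddr => 1 to the constructor, since it subclasses IO::Socket::INET.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in addr;
    int one = 1;
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); return 1; }

    /* Must be set before bind(): allows rebinding the port while
       leftover connections from a previous instance are still
       shutting down. */
    if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one)) == -1) {
        perror("setsockopt");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(8080);               /* hypothetical port */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) == -1) {
        perror("bind");
        return 1;
    }
    /* listen()/accept() as usual */
    close(fd);
    return 0;
}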
I have been trying to write a script to start/stop a service with svcadm. What I do not understand is how the pid of the executed process gets into /var/run/myprocess.pid. I cannot find anything in the other scripts in /lib/svc/method that writes to /var/run. Does this mean that I have to explicitly extract the target location of the pid file from an environment variable, have my program query its own pid, and write code to put the pid in the /var/run/myprocess.pid file?
The pid file is to be created by the daemon binary itself, not by the service scripts. If your code needs to be portable to non-Solaris-10+ OSes, you might use defines like this:
http://src.opensolaris.org/source/xref/amd/ibs-gate/usr/src/cmd/ipf/tools/ipmon.c#130
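As a rough sketch of that pattern (the path /var/run/myprocess.pid and the helper name are hypothetical):

#include <stdio.h>
#include <unistd.h>

/* Typically called by the daemon right after daemonizing (fork/setsid),
   so the pid recorded is the one that will keep running. */
static int write_pidfile(const char *path)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fprintf(f, "%ld\n", (long)getpid());
    fclose(f);
    return 0;
}

int main(void)
{
    if (write_pidfile("/var/run/myprocess.pid") != 0) {  /* hypothetical path */
        perror("pidfile");
        return 1;
    }
    /* ... daemon main loop ... */
    return 0;
}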