How do I find the process ID that listens on a SOCK_SEQPACKET socket from a C/C++ program?

There is a server that starts and listens on a SOCK_SEQPACKET socket. I think the SOCK_SEQPACKET socket has a name, say #Foo. It is per user, i.e. started by each Linux user account, and there is only one service per user.
The goal is to make an external tool that forcefully kills the service when something goes wrong. The language is C++.
When I have to kill that service, I use netstat:
$ netstat -lnp | egrep Foo
I would not like to have a dependency on the net-tools package, though. I would like to do this with minimal dependencies on external tools and/or even libraries.
I've tried a Google search and could not find how. I could perhaps read the netstat source code to see how it does it, but I'd like to keep that as a last resort. I have learned how to kill the service using netstat, but that gives me a dependency on net-tools. The service somehow runs as /proc/self or so; I could visit every process ID in /proc and check whether it looks like the service (executable name, etc.). However, that is not necessarily enough to narrow it down to the one process that uses the #Foo SOCK_SEQPACKET socket. Now, as a non-expert in socket/network programming, I am running out of ideas about how to proceed.

As #user253751 says, that's not actually a socket type but more of a socket behavior type. As they suggested, it is probably a unix domain socket. Given the '#' sign in it, it is almost certainly using the unix domain socket's "abstract" namespace (which means the socket doesn't have an endpoint directly visible in the filesystem). At the system call level, such endpoints are created with a null byte as the first character (the # is commonly substituted in command line utilities or wherever the socket is surfaced to user space for user convenience). See the unix(7) man page for more details.
Since the socket doesn't appear in the filesystem anywhere, it can be difficult to find the association of process to socket. One way you can find that is through use of the lsof utility -- but that is probably little improvement over the netstat utility.
It appears that the abstract namespace sockets do show up in the /proc/net/unix pseudo-file. In association with that, there is an inode column (second to the last). In conjunction with looking at each process's /proc/<pid>/fd directory, I think you can make the association.
For example, I created a small python program that creates a Unix domain SOCK_SEQPACKET socket and binds it to '#Foo'. That program is here:
import socket, time
sock = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
sock.bind('\0Foo') # Note the \0 for the initial null byte
time.sleep(10000) # Just hang around while we do other stuff
In a separate terminal window:
gh $ grep '#Foo' /proc/net/unix
0000000000000000: 00000002 00000000 00000000 0005 01 7733102 #Foo
gh $ sudo bash -c 'for proc in /proc/[0-9]*; do ls -l $proc/fd | grep 7733102 && echo $proc; done'
lrwx------ 1 gh gh 64 Feb 2 17:13 3 -> socket:[7733102]
/proc/339264
gh $ ps -fp 339264
UID PID PPID C STIME TTY TIME CMD
gh 339264 22444 0 17:11 pts/3 00:00:00 python3
gh $ ls -l /proc/339264/fd
total 0
lrwx------ 1 gh gh 64 Feb 2 17:13 0 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 1 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 2 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 3 -> 'socket:[7733102]'
Explaining:
First, we do a grep on /proc/net/unix for the '#Foo' socket. That gives us the inode number 7733102 (second to last column). The inode number is unique to every socket on the system.
Next, as root, we do an ls -l /proc/<pid>/fd for each process on the system and grep for the inode number we found in the previous step. When it is found, we print out the /proc/<pid> entry. (On Linux, that pseudo-directory shows a symlink describing what each file descriptor open in the process refers to. For sockets, that representation is always socket:[<inode>].) Here we found the process ID to be 339264.
The remaining two steps just confirm that this is, in fact, our Python process; you can see its four open files (the first three [stdin, stdout, stderr] all pointing to the terminal's pseudo-tty, the fourth to the socket).
To make this into a more foolproof program, you'd need to account for the fact that the inode number you found in step 1 could be a substring of some other socket's inode number, but that is left as an exercise for the reader. :)
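Since the original question asks for doing this from a program with no external tools, here is a minimal sketch of the same association done programmatically. It is written in Python to match the demo program above; the two /proc reads translate directly to C++ (std::ifstream plus readlink(2)). Note that the leading marker character shown for abstract names in /proc/net/unix can appear as '@' or '#' depending on the kernel/tool, so the sketch accepts either, and it must run with enough privilege to read the target process's /proc/<pid>/fd directory (e.g. as root or as the owning user).
import os

def find_pid_for_abstract_socket(name):
    """Return the PID owning the abstract AF_UNIX socket 'name', or None."""
    inode = None
    with open('/proc/net/unix') as f:
        next(f)                                  # skip the header line
        for line in f:
            fields = line.split()
            # Path is the last column; abstract names carry a leading
            # marker character ('@' or '#').
            if len(fields) >= 8 and fields[-1][0] in '@#' and fields[-1][1:] == name:
                inode = fields[-2]               # the Inode column
                break
    if inode is None:
        return None
    target = 'socket:[%s]' % inode
    # Scan every numeric /proc entry for an fd symlink pointing at
    # socket:[<inode>].  Needs privilege to read other users' fd dirs.
    for pid in filter(str.isdigit, os.listdir('/proc')):
        fd_dir = '/proc/%s/fd' % pid
        try:
            for fd in os.listdir(fd_dir):
                try:
                    if os.readlink(os.path.join(fd_dir, fd)) == target:
                        return int(pid)
                except OSError:
                    pass                         # fd closed in the meantime
        except OSError:
            pass                                 # process gone or not readable
    return None

print(find_pid_for_abstract_socket('Foo'))
Once you have the PID, killing the service is just a kill(2) (or os.kill) away.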

Related

How to simulate "set -o vi" in csh, like we do in ksh?

I used ksh a lot, but I'm now moving to csh since this company has all its scripts etc. written in it.
One thing I loved about ksh was the ease of using the command history.
If I have to edit the last or second-to-last command, I can easily press "Escape - k" to cycle through the commands and edit them.
csh does not seem to have a good equivalent: it displays the whole command history, and then I have to copy and paste one of the commands and edit it.
That's a pain when, for example, you just want to change a number in the file name:
cat abcdef1 | grep "Linking"
You could use (as a workaround):
"The most commonly used feature of csh command history is "!!" (pronounced "bang bang") which recalls the previous command":
% date
Mon Nov 25 13:52:36 PST 1996
% !!
date
Mon Nov 25 13:52:52 PST 1996
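For the specific case of changing just one string in the previous command (like the number in your file name), csh's quick-substitution syntax ^old^new is handy as well. Using the file names from your example:
% cat abcdef1 | grep "Linking"
% ^abcdef1^abcdef2
cat abcdef2 | grep "Linking"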
The csh man page describes some other useful history commands.
But you could also just work interactively in ksh and keep the scripts and everything else in csh.
If it is installed, simply launch:
/bin/ksh

grep command to print follow-up lines after a match

How do I use the "grep" command to find a match and print the 10 lines that follow the match? I need this to get some error statements from log files (otherwise I'd have to download the files, match on the log time, and then copy the content). Instead of downloading bulky files, I'd rather run a command that gives me just those lines.
A default install of Solaris 10 or 11 will have the /usr/sfw/bin file tree, and GNU grep (/usr/sfw/bin/ggrep) is there. ggrep supports /usr/sfw/bin/ggrep -A 10 [pattern] [file], which does what you want.
Solaris 9 and older may not have it. Or your system may not have been a default install. Check.
Suppose you have the file /etc/passwd and want to filter for the user "chetan".
Try the command below:
cat /etc/passwd | /usr/sfw/bin/ggrep -A 2 'chetan'
It will print the line containing "chetan" and the next two lines as well.
-- Tested in Solaris 10 --

Why does grep hang when run against the / directory?

My question is in two parts :
1) Why does grep hang when I grep all files under "/"?
For example:
grep -r 'h' ./
(Note: right before the hang/crash, I see some "no such device or address" messages regarding sockets.)
Of course, I know that grep shouldn't be run against a socket, but I would think that since sockets are just files in Unix, it should return a negative result rather than crashing.
2) Now, my follow-up question: how can I grep the whole filesystem? Are there certain *NIX directories we should leave out when doing this? In particular, I'm looking for all recently written log files.
As #ninjalj said, if you don't use -D skip, grep will try to read all your device files, socket files, and FIFO files. In particular, on a Linux system (and many Unix systems), it will try to read /dev/zero, which appears to be infinitely long.
You'll be waiting for a while.
If you're looking for a system log, starting from /var/log is probably the best approach.
If you're looking for something that really could be anywhere in your file system, you can do something like this:
find / -xdev -type f -print0 | xargs -0 grep -H pattern
The -xdev argument to find tells it to stay within a single filesystem; this will avoid /proc and /dev (as well as any mounted filesystems). -type f limits the search to ordinary files. -print0 prints the file names separated by null characters rather than newlines; this avoids problems with files having spaces or other funny characters in their names.
xargs reads a list of file names (or anything else) on its standard input and invokes the specified command on everything in the list. The -0 option works with find's -print0.
The -H option to grep tells it to prefix each match with the file name. By default, grep does this only if there are two or more file names on its command line. Since xargs splits its arguments into batches, it's possible that the last batch will have just one file, which would give you inconsistent results.
Consider using find ... -name '*.log' to limit the search to files with names ending in .log (assuming your log files have such names), and/or using grep -I ... to skip binary files.
Note that all this depends on GNU-specific features. Some of these options might not be available on MacOS (which is based on BSD) or on other Unix systems. Consult your local documentation, and consider installing GNU findutils (for find and xargs) and/or GNU grep.
Before trying any of this, use df to see just how big your root filesystem is. Mine is currently 268 gigabytes; searching all of it would probably take several hours. A few minutes spent (a) restricting the files you search and (b) making sure the command is correct will be well worth the time you spend.
By default, grep tries to read every file. Use -D skip to skip device files, socket files and FIFO files.
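For example, the command from the question becomes:
grep -r -D skip 'h' ./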
If you keep seeing error messages, then grep is not hanging. Keep iotop open in a second window to see how hard your system is working to pull all the contents off its storage media into main memory, piece by piece. That operation is bound to be slow unless you have a very bare-bones system.
Now, my follow-up question: in any case, how can I grep the whole filesystem? Are there certain *NIX directories we should leave out when doing this? In particular, I'm looking for all recently written log files.
Grepping the whole FS is very rarely a good idea. Try grepping the directory where the log files should have been written; likely /var/log. Even better, if you know anything about the names of the files you're looking for (say, they have the extension .log), then do a find or locate and grep the files reported by those programs.
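For example, to search only .log files under /var/log that were modified within the last day ('pattern' here is just a placeholder):
find /var/log -name '*.log' -mtime -1 -exec grep -H 'pattern' {} +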

How to read a block in a storage pool (zpool) using dd?

I want to read a block in a zpool storage pool using the dd command. Since zpool doesn't create a device file like other volume managers such as vxvm, I don't know which block device to use for reading. Is there any way to read data block by block in a zpool?
You can probably use the zdb command. Here is a pdf about it, and the help output.
http://www.bruningsystems.com/osdevcon_draft3.pdf
# zdb --help
zdb: illegal option -- -
Usage: zdb [-CumdibcsDvhL] poolname [object...]
zdb [-div] dataset [object...]
zdb -m [-L] poolname [vdev [metaslab...]]
zdb -R poolname vdev:offset:size[:flags]
zdb -S poolname
zdb -l [-u] device
zdb -C
Dataset name must include at least one separator character '/' or '#'
If dataset name is specified, only that dataset is dumped
If object numbers are specified, only those objects are dumped
Options to control amount of output:
-u uberblock
-d dataset(s)
-i intent logs
-C config (or cachefile if alone)
-h pool history
-b block statistics
-m metaslabs
-c checksum all metadata (twice for all data) blocks
-s report stats on zdb's I/O
-D dedup statistics
-S simulate dedup to measure effect
-v verbose (applies to all others)
-l dump label contents
-L disable leak tracking (do not load spacemaps)
-R read and display block from a device
Below options are intended for use with other options (except -l):
-A ignore assertions (-A), enable panic recovery (-AA) or both (-AAA)
-F attempt automatic rewind within safe range of transaction groups
-U <cachefile_path> -- use alternate cachefile
-X attempt extreme rewind (does not work with dataset)
-e pool is exported/destroyed/has altroot/not in a cachefile
-p <path> -- use one or more with -e to specify path to vdev dir
-P print numbers parsable
-t <txg> -- highest txg to use when searching for uberblocks
Specify an option more than once (e.g. -bb) to make only that option verbose
Default is to dump everything non-verbosely
Unfortunately, I don't know how to use it.
# zdb
tank:
version: 28
name: 'tank'
...
vdev_tree:
...
children[0]:
...
children[0]:
...
path: '/dev/label/bank1d1'
phys_path: '/dev/label/bank1d1'
...
So I took the array indexes 0 0 to get my first disk (bank1d1) and did this command. It did something. I don't know how to read the output.
zdb -R tank 0:0:4e00:200 | strings
Have fun... try not to destroy anything. Here is your warning from the man page:
The zdb command is used by support engineers to diagnose failures and
gather statistics. Since the ZFS file system is always consistent on
disk and is self-repairing, zdb should only be run under the direction
by a support engineer.
And please tell us what you actually were looking for. Was Alan right that you wanted to do backups?
You can read from the underlying raw devices in the pool, but as far as I can tell there's no concept of a single contiguous block device representing the whole pool.
A ZFS pool is not the single contiguous run of sectors that 'classic' volume managers present. ZFS's internal structure is closer to a tree, which would be somewhat challenging to represent as a flat array of blocks.
Ben Rockwood's blog post "zdb: Examining ZFS At Point-Blank Range" may help getting better idea of what's under the hood.
I have no idea what doing so might be useful for, but you can certainly read blocks from the underlying devices used by the pool; they are shown by the zpool status command. If you are really asking about zvols instead of zpools, those are accessible under /dev/zvol/rdsk/pool-name/zvol-name. If you want to look at internal zpool data, you probably want to use zdb.
If you want to backup ZFS filesystems you should be using the following tools:
'zfs snapshot' to create a stable snapshot of the filesystem
'zfs send' to send a copy of the snapshot to somewhere else
'zfs receive' to go back from a snapshot to a filesystem.
'dd' is almost certainly not the tool you should be using. In your case you could 'zfs send' and redirect the output into a file on your other filesystem.
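For example (the dataset and file names here are purely illustrative):
zfs snapshot tank/home@backup1
zfs send tank/home@backup1 > /otherfs/tank-home-backup1.zfs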
See chapter 7 of the ZFS administration guide for more details.

How do I search a CVS repository for a particular file?

Is there any way to do it? I only have client access and no access to the server. Is there a command I've missed or some software that I can install locally that can connect and find a file by filename?
You could grep the output of
cvs rlog -Nh .
(note the period character at the end - this effectively means: the whole repository).
That should give you info about the whole shebang including removed files and files added on branches.
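For example, to look for a file whose name contains "myfile" (just a placeholder name):
cvs rlog -Nh . | grep -i myfile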
You can use
cvs rls -Rde <modulename>
which will recursively give you all the files in the module, e.g.
foo:
/x.py/1.2/Mon Dec 1 23:33:51 2008//
/y.py/1.1/Mon Dec 1 23:33:31 2008//
D/bar////
foo/bar:
/xxx/1.1/Mon Dec 1 23:36:38 2008//
Notice that the -d option also gives you deleted files; I'm not sure whether you wanted that. Without -e, it only gives you the file names.