Command-oriented Linux drivers - linux-device-driver

How can I make my driver act on the commands cat and echo?
Does cat call the read() system call of a device driver?
Does echo call the write() system call of a device driver?
I want to support these two commands, cat and echo, in my driver, which controls 8 LEDs.
If I echo, it should light LED 3:
echo "3=1" > /dev/led_node
If I cat, it should print the following status output:
cat /dev/led_node
0 0
1 0
2 0
3 1
4 0
5 0
6 0
7 0
Please suggest which part of the driver, or which of its system calls, interacts with the cat and echo commands.

You can easily check how cat reads files using strace, for example:
$ echo '123' >/tmp/test.txt
$ strace cat /tmp/test.txt
In the output, you can spot the open() call:
open("/tmp/test.txt", O_RDONLY) = 3
This returns 3, a file descriptor associated with /tmp/test.txt. Further down the output, you can see:
read(3, "123\n", 65536) = 4
This takes file descriptor 3 and reads from it (using a buffer size of 65536 bytes and getting back 4 bytes), which basically answers your first question: cat does call read(). You can do the same thing for echo and figure out that it calls write().
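In essence, cat is little more than the loop below. This is a stripped-down sketch (one file argument, minimal error handling), not cat's actual source, but it shows exactly which system calls reach your driver:
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char buf[65536];
    ssize_t n;
    int fd;

    if (argc < 2)
        return 1;
    fd = open(argv[1], O_RDONLY);   /* the open() seen in the strace output */
    if (fd < 0)
        return 1;
    while ((n = read(fd, buf, sizeof(buf))) > 0)   /* your driver's read() */
        write(STDOUT_FILENO, buf, n);              /* copy to stdout */
    close(fd);                      /* your driver's release() */
    return 0;
}
Note that the loop only stops once read() returns 0, which is why your driver's read() must eventually report end-of-file; otherwise cat will hang forever.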
In your character device driver, you would have to implement those calls. For a great explanation of how this works, along with useful examples, check out Linux Device Drivers, Chapter 3.
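As a rough sketch (hypothetical names, the 8 LEDs kept in a simple in-memory array, no locking and no real GPIO code), the read and write hooks for your LED driver could look something like this:
#include <linux/fs.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/uaccess.h>

static char led_state[8];   /* hypothetical in-memory LED state */

/* cat /dev/led_node ends up here: format the state table for user space */
static ssize_t led_read(struct file *f, char __user *buf,
                        size_t len, loff_t *off)
{
    char out[64];
    int i, n = 0;

    if (*off > 0)
        return 0;   /* report EOF on the second call so cat stops reading */
    for (i = 0; i < 8; i++)
        n += scnprintf(out + n, sizeof(out) - n, "%d %d\n", i, led_state[i]);
    if (len < (size_t)n)
        return -EINVAL;
    if (copy_to_user(buf, out, n))
        return -EFAULT;
    *off = n;
    return n;
}

/* echo "3=1" > /dev/led_node ends up here: parse "<led>=<value>" */
static ssize_t led_write(struct file *f, const char __user *buf,
                         size_t len, loff_t *off)
{
    char in[16];
    unsigned int led, val;

    if (len >= sizeof(in))
        return -EINVAL;
    if (copy_from_user(in, buf, len))
        return -EFAULT;
    in[len] = '\0';
    if (sscanf(in, "%u=%u", &led, &val) != 2 || led > 7 || val > 1)
        return -EINVAL;
    led_state[led] = val;   /* drive the real GPIO/LED here */
    return len;
}

static const struct file_operations led_fops = {
    .owner = THIS_MODULE,
    .read  = led_read,
    .write = led_write,
};
Register led_fops when you create the character device (via cdev_add(), misc_register(), or similar), and cat and echo will land in these two functions.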
Hope it helps. Good Luck!

cat interacts with the read function of your driver; echo interacts with the write function. The thing is, when you cat /dev/led_node, the device file/node is opened (the open system call), then the read system call is invoked repeatedly until it returns zero (no data left to read), and finally the close system call is made, which closes the device node/file.

Related

How do I find the process ID that listens on a SOCK_SEQPACKET socket in a C/C++ program

There is a server that starts and listens on a SOCK_SEQPACKET socket. The socket has a name, say #Foo. It is per-user, initiated by each Linux user account, and there is only one service per user.
The goal is to make an external tool that forcefully kill the service when something goes wrong. The language is C++.
When I have to kill that service, I use netstat:
$ netstat -lnp | egrep Foo
I would not like to have a dependency on the net-tools package, though. I would like to do this with minimal dependencies on external tools and/or even libraries.
I've tried a Google search and could not find out how. I could perhaps read the netstat source code to see how it does this, but I'd like to keep that as a last resort. I have learned how to kill the service using netstat, which gives me a dependency on net-tools. The service shows up as something like /proc/self. I could visit every process ID in /proc and check whether each one looks like the service (executable name, etc.), but that is not necessarily enough to narrow it down to the one process that uses the #Foo SOCK_SEQPACKET socket. As a non-expert in socket/network programming, I am running out of ideas about how to proceed.
As @user253751 says, SOCK_SEQPACKET is not an address family but a socket (behavior) type. As they suggested, it is probably a Unix domain socket. Given the '#' sign in the name, it is almost certainly using the Unix domain socket's "abstract" namespace (which means the socket doesn't have an endpoint directly visible in the filesystem). At the system call level, such endpoints are created with a null byte as the first character (the '#' is commonly substituted in command-line utilities, or wherever the socket is surfaced to user space, for convenience). See the unix(7) man page for more details.
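For illustration, here is how a C program would bind to that abstract name. The tricky part is the address length passed to bind(), which must cover only the leading null byte plus the name, with no trailing terminator:
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Bind a SOCK_SEQPACKET socket to the abstract name "\0Foo". */
int bind_abstract_foo(void)
{
    struct sockaddr_un addr;
    int fd = socket(AF_UNIX, SOCK_SEQPACKET, 0);

    if (fd < 0)
        return -1;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    /* Abstract namespace: sun_path[0] stays '\0', the name follows it. */
    memcpy(addr.sun_path + 1, "Foo", 3);
    if (bind(fd, (struct sockaddr *)&addr,
             offsetof(struct sockaddr_un, sun_path) + 1 + 3) < 0)
        return -1;
    return fd;
}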
Since the socket doesn't appear in the filesystem anywhere, it can be difficult to find the association of process to socket. One way you can find that is through use of the lsof utility -- but that is probably little improvement over the netstat utility.
It appears that abstract-namespace sockets do show up in the /proc/net/unix pseudo-file. There is an inode column in it (second to last). In conjunction with looking at each process's /proc/<pid>/fd directory, I think you can make the association.
For example, I created a small Python program that creates a Unix domain SOCK_SEQPACKET socket and binds it to '#Foo'. That program is here:
import socket, time
sock = socket.socket(socket.AF_UNIX, socket.SOCK_SEQPACKET)
sock.bind('\0Foo') # Note the \0 for the initial null byte
time.sleep(10000) # Just hang around while we do other stuff
In a separate terminal window:
gh $ grep '#Foo' /proc/net/unix
0000000000000000: 00000002 00000000 00000000 0005 01 7733102 #Foo
gh $ sudo bash -c 'for proc in /proc/[0-9]*; do ls -l $proc/fd | grep 7733102 && echo $proc; done'
lrwx------ 1 gh gh 64 Feb 2 17:13 3 -> socket:[7733102]
/proc/339264
gh $ ps -fp 339264
UID PID PPID C STIME TTY TIME CMD
gh 339264 22444 0 17:11 pts/3 00:00:00 python3
gh $ ls -l /proc/339264/fd
total 0
lrwx------ 1 gh gh 64 Feb 2 17:13 0 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 1 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 2 -> /dev/pts/3
lrwx------ 1 gh gh 64 Feb 2 17:13 3 -> 'socket:[7733102]'
Explaining:
First, we grep /proc/net/unix for the '#Foo' socket. That gives us the inode number 7733102 (second-to-last column). The inode number is unique to every socket on the system.
Next, as root, we do an ls -l on /proc/<pid>/fd for each process on the system and grep for the inode number found in the previous step. When it matches, we print the /proc/<pid> entry. (On Linux, that pseudo-directory contains a symlink describing each file descriptor open in the process. For sockets, the link target is always socket:[<inode>].) Here we found that the process ID is 339264.
The remaining two steps just confirm that this is, in fact, our Python process, and you can see its four open files (the first three [stdin, stdout, stderr] all pointing to its terminal's pseudo-tty, the fourth to the socket).
To make this into a more foolproof program, you'd need to account for the fact that the inode number found in step 1 could be a substring of some other socket's inode number, but that is left as an exercise for the reader. :)
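If you want to do it without shelling out, the same two steps translate to plain C against /proc. This is only a sketch under the assumptions above (the socket name as it appears in /proc/net/unix, minimal error handling):
#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Step 1: find the inode of the named socket in /proc/net/unix. */
static long find_socket_inode(const char *name)
{
    char line[512], path[256];
    unsigned long ino;
    long found = -1;
    FILE *f = fopen("/proc/net/unix", "r");

    if (!f)
        return -1;
    while (fgets(line, sizeof(line), f)) {
        path[0] = '\0';
        /* Columns: Num RefCount Protocol Flags Type St Inode Path */
        if (sscanf(line, "%*x: %*x %*x %*x %*x %*x %lu %255s",
                   &ino, path) >= 1 && strcmp(path, name) == 0) {
            found = (long)ino;
            break;
        }
    }
    fclose(f);
    return found;
}

/* Step 2: scan every /proc/<pid>/fd for a link to "socket:[<inode>]". */
static pid_t find_owner_pid(long inode)
{
    char want[64], link[512], target[64];
    struct dirent *de, *fe;
    DIR *proc = opendir("/proc");
    pid_t found = -1;

    snprintf(want, sizeof(want), "socket:[%ld]", inode);
    while (proc && found < 0 && (de = readdir(proc)) != NULL) {
        char fddir[64];
        DIR *fd;
        pid_t pid = (pid_t)atoi(de->d_name);

        if (pid <= 0)
            continue;
        snprintf(fddir, sizeof(fddir), "/proc/%d/fd", pid);
        fd = opendir(fddir);
        while (fd && (fe = readdir(fd)) != NULL) {
            ssize_t n;
            snprintf(link, sizeof(link), "%s/%s", fddir, fe->d_name);
            n = readlink(link, target, sizeof(target) - 1);
            if (n > 0) {
                target[n] = '\0';
                /* Full-string compare avoids the substring pitfall above. */
                if (strcmp(target, want) == 0)
                    found = pid;
            }
        }
        if (fd)
            closedir(fd);
    }
    if (proc)
        closedir(proc);
    return found;
}

int main(void)
{
    long ino = find_socket_inode("#Foo");  /* name as shown in /proc/net/unix */
    pid_t pid = ino > 0 ? find_owner_pid(ino) : -1;

    if (pid > 0)
        printf("%d\n", pid);   /* ready to be handed to kill(2) */
    return pid > 0 ? 0 : 1;
}
Like the shell version, this needs enough privilege to read other users' /proc/<pid>/fd directories.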

How can I check if a file is a Unix socket using Scala?

I can create a socket locally using:
$ python -c "import socket as s; sock = s.socket(s.AF_UNIX); sock.bind('/mnt/somesocket')"
I can check that the file is a socket at the command-line via the following:
$ test -S foo.txt
$ echo $?
1
$ test -S somesocket
$ echo $?
0
How can I check if the file is a socket using Scala?
There are multiple options available, but none is perfect...
test will call the S_ISSOCK macro, as you can see from the code here:
struct stat stat_buf;
...
S_ISSOCK (stat_buf.st_mode)
The macro itself is very simple: it just checks whether certain bits are set, via bitmasks. Relevant code from here:
#define S_IFMT 00170000
...
#define S_IFSOCK 0140000
...
#define S_ISSOCK(m) (((m) & S_IFMT) == S_IFSOCK)
Thus, if you had a stat structure with the st_mode value, you would be all set. However, that part is tricky.
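For reference, the entire check that test -S performs fits in a few lines of C, mirroring the macro above:
#include <sys/stat.h>

int main(int argc, char **argv)
{
    struct stat sb;

    if (argc < 2 || stat(argv[1], &sb) != 0)
        return 2;                         /* error, mirroring test(1) */
    return S_ISSOCK(sb.st_mode) ? 0 : 1;  /* 0 = socket, 1 = not */
}
Getting at that same st_mode from the JVM is the hard part.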
Java NIO provides some POSIX support with the Files and BasicFileAttributes classes, but it does not expose the file mode or any test methods for sockets.
If you don't deal with many special file types, you might be OK using isOther:
boolean isOther()
Tells whether the file is something other than a regular file,
directory, or symbolic link.
Otherwise, you can try JNI/JNA implementations:
https://github.com/SerCeMan/jnr-fuse/blob/master/src/main/java/ru/serce/jnrfuse/struct/FileStat.java
Is there a Java library of Unix functions?
Finally, the simplest but slowest solution is to call a process (shell) from Scala.

How to prevent output truncation when a WinDbg command produces too many rows?

If a WinDbg command produces too many rows of output (say, 100k), WinDbg displays only a few thousand of them and truncates the rest. My question is: how can I prevent the output from being truncated, or write all of the output rows to a local file so that none are lost? "Write Window Text to File" doesn't help.
The .logopen and .logclose commands might help in this case: they respectively open and close a log file that keeps a copy of the events and commands from the Debugger Command window.
See also Keeping a Log File in WinDbg.
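A typical session would look like this (the path is just an example):
.logopen C:\temp\big_output.log
<run the command with the large output here>
.logclose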
Sometimes simply piping works, especially when running cdb and quitting after executing just one command:
cdb -c "tc 100;q" calc >> foo.txt
You should have 100 calls; let's check:
grep -c !.*: foo.txt
256
Let's check how many sysenter instructions were executed and what the indexes of the syscalls were:
grep sysenter -B 4 foo.txt | grep eax | awk "{print $1}"
eax=000000ea
eax=0000014d
eax=000000fb
If .logopen/.logclose isn't an option, piping like this also lets us use the output while the commands run for an unbounded amount of time, without file-locking issues.
Try opening an additional command window with Ctrl+N and executing the long-output command within it.

Executing in the background, but limiting the number of executions

I have a program that performs some operations on a specified log file, flushing to disk multiple times per execution. I'm calling this program from a Perl script that lets you specify a directory, and I run the program on all files within that directory. This can take a long time because of all the flushes.
I'd like to execute the program and run it in the background, but I don't want thousands of simultaneous executions piling up. This is a snippet:
my $command = "program $data >> $log";
myExecute("$command");
myExecute basically runs the command using system(), along with some other logging/printing functions. What I want to do is:
my $command = "program $data & >> $log";
This will obviously create a huge number of background processes. Is there any way to limit how many background executions are present at a time (preferably using &)? I'd like to allow 2-4.
#!/bin/bash
#
# let's call this script "multi_script.sh"
#
# Wait until no more than 4 instances are running,
# polling at 5-second intervals.
while [ $( pgrep -c program ) -gt 4 ]; do sleep 5; done
/path/to/program "$1" &
Now call it like this:
my $command = "multi_script.sh $data" >> $log;
Your Perl script will wait whenever the bash script waits.
Positives:
If a process crashes, it will be replaced (the data goes, of course, unprocessed).
Drawbacks:
It is important for your Perl script to wait a moment between starting instances (maybe a one-second sleep), because of the latency between invoking the script and passing the while-loop test. If you spawn them too quickly (system spamming), you will end up with many more processes than you bargained for.
If you are able to change
my $command = "program $data & >> $log";
into
my $command = "cat $data >> /path/to/datafile";
(or, even better, append $data to /path/to/datafile directly from Perl), and when your script finishes, the last line is:
system("/path/to/quadslotscript.sh");
then the script quadslotscript.sh below will do the work:
4 execution slots are started and stay active until the end;
all slots get their input from the same datafile;
when a slot finishes processing an entry, it reads the next one, until the datafile/queue is empty;
there is no process-table lookup during execution, only once at the end when all work is done.
The code:
#!/bin/bash
# $log is assumed to be set in the environment before this script runs.
# Use the datafile as a queue from which all processes get their input.
exec 3< /path/to/datafile
# 4 separate worker processes, all reading lines from fd 3.
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
while read -u 3 DATA; do /path/to/program "$DATA" >> "$log"; done &
# Only exit when 100% sure that all processes have ended.
while pgrep program &>/dev/null; do wait; done

Why does bash loop deploy script only seem to work once?

I have a few simple scripts that are intended to daisy-chain together, running a specific script on a set of servers listed in a file, one per line.
The single server deploy script contains the following:
#!/bin/bash

file=$1
host=$2

scp ${file} ${host}:/tmp/
USER=`whoami`
ssh -t -t ${USER}@${host} /tmp/${file}

ssh "${host}" /bin/rm /tmp/${file}
exit
It works fine with a script I have that yum-installs Tomcat and symlinks hadoop/hbase configs to the shared class directory.
The second major file is deploy-all.sh, which is intended to parse a list of hosts and run the deploy script against all of them:
#!/bin/bash

script=$1

cat dumbo-hosts | while read fileline
do
  echo ${fileline}
  ./deploy.sh ${script} ${fileline}

  sleep 10
done
What happens is that the script runs once, and then the loop breaks... I got something like the following output:
$ ./deploy-all.sh setup-tomcat.sh
line is hadoop01.myhost
setup-tomcat.sh 100% 455 0.4KB/s 00:00
tcgetattr: Inappropriate ioctl for device
hadoop02.myhost
hadoop03.myhost
hadoop04.myhost
<succesful output of hadoop01 task>
...
Connection to hadoop01.myhost closed.
If I comment out the ssh commands, the loop runs successfully through all 4 hosts, so I presume it's something involving stdin getting cut off once the ssh occurs. In addition, the tcgetattr error concerns me somewhat.
How can I get around this? What exactly is causing the tcgetattr error (I'm not even sure it's related)?
I haven't really done much with shell scripts, so sorry if I'm missing something really obvious here; any help would be appreciated.
It's a problem with ssh consuming the loop's stdin when run as part of the subprocess: it reads the rest of your host list as its own input. The workaround is to use -n when invoking ssh from a non-terminal context, which redirects ssh's stdin from /dev/null:
option=-n
tty -s 2>/dev/null && option=
scp ${file} ${host}:/tmp/
ssh $option -t ${host} /tmp/${file}
ssh $option ${host} rm /tmp/${file}
I solved this by using bash arrays to temporarily store the lines, avoiding the stdin interruption... but it feels wrong. If anyone has a better way of getting around this, please let me know.
Here's my solution:
#!/bin/bash

#myscript = $1
count=0

declare -a lines

while read line
do
  lines[$count]=$line
  ((count++))
done < dumbo-hosts

for i in "${lines[@]}"
do
  echo "$i"
  ./deploy.sh "$1" "${i}"
done