How to monitor a system call?

I have a problem where an unknown process (on Linux) changes the permissions on certain files, and I am looking for a way to figure out which process it might be. Ideally there would be a way to log all calls to a specific system call, with the PID, program name, and timestamp; I imagine one could do this by hooking into the system call(s) and extracting the needed information before passing the call on to the kernel. Is there any such tool?

From Server Fault question: Unix filesystem hook
answered by Steven Monday
A tool called incron might help you.
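For illustration, a hypothetical incrontab entry (installed with incrontab -e) that logs permission/attribute changes under a watched directory. One caveat worth knowing up front: inotify-based tools like incron report which file changed, but not the PID of the process that changed it.

    /etc/myapp IN_ATTRIB /usr/bin/logger "permissions changed on $@/$#"

Here /etc/myapp is a placeholder path; $@ expands to the watched directory and $# to the affected file name.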

Related

macOS: programmatic check if process runs as a launchDaemon or launchAgent or from command-line

I'd like to get an indication of the context my process is running in. I'd like to distinguish between the following cases:
It runs as a persistent scheduled task (launchDaemon/launchAgent).
It was launched on demand by launchd, via the open command-line tool or a double-click.
It was run directly from a terminal (i.e. typing /bin/myProg into a shell).
Is there any indication of the process context available through an Objective-C/Swift framework, or any other way? I wish to avoid reinventing the wheel here :-)
Thanks
There is definitely no simple public API or framework for doing this, and doing it is hard.
Some parts of this information could possibly be retrieved by your process itself through side channels that work on some system versions:
There is a launchctl C-based API, which you can try to use to enumerate all launch daemon/agent tasks and search for your app's path/PID. You may need root privileges for your process to do this.
Launching via the open command-line tool can sometimes be detected through environment variables it sets for your process.
Running directly from the command line can leave responsible_pid filled in correctly (this comes from a private libquarantine API, unless you observe it with Endpoint Security, available from roughly macOS 11 onward).
All of these things, except the launchctl API, are not public, not reliable, could be broken at any time by Apple, and may not be sufficient for your needs.
But they are worth a try, because there is nothing better :) One crude heuristic is sketched below.
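For what it's worth, here is a minimal sketch of the parent-process heuristic hinted at above. It is an assumption-laden illustration, not a complete answer: processes spawned directly by launchd have PID 1 as their parent, which at least separates the terminal case from the other two.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Heuristic only: launchd (PID 1) is the direct parent of
           daemons, agents, and apps started via `open`; a process run
           from a terminal has the shell as its parent instead. */
        if (getppid() == 1)
            printf("spawned by launchd: daemon/agent, or launched via open\n");
        else
            printf("likely started from a shell or another ordinary process\n");

        /* Separating the `open` case from daemon/agent would rely on the
           unreliable side channels listed above (environment variables,
           responsible_pid), which vary across macOS versions. */
        return 0;
    }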
You could potentially distinguish all the cases you want by monitoring system events from some other (root-privileged) process you control, possibly adopting the Endpoint Security framework (which requires an entitlement from Apple and can't be distributed via the App Store), calling a lot of private APIs, and doing a bunch of reversing tricks.
The open resource I could suggest on this topic is here.

Monitoring a directory in Cocoa/Cocoa Touch

I am trying to find a way to monitor the contents of a directory for changes. I have tried two approaches.
Use kqueue to monitor the directory
Use GCD to monitor the directory
The problem I am encountering is that I can't find a way to detect which file has changed. I am attempting to monitor a directory with potentially thousands of files in it and I do not want to call stat on every one of them to find out which ones changed. I also do not want to set up a separate dispatch source for every file in that directory. Is this currently possible?
Note: I have documented my experiments monitoring files with kqueue and GCD.
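For reference, a minimal sketch of the kqueue approach described in the question; the watched path is hypothetical, and the sketch runs into exactly the limitation being asked about: you learn that the directory changed, not which file.

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/watched", O_EVTONLY);   /* path hypothetical */
        if (fd < 0) { perror("open"); return 1; }
        int kq = kqueue();

        struct kevent change;
        EV_SET(&change, fd, EVFILT_VNODE, EV_ADD | EV_CLEAR,
               NOTE_WRITE | NOTE_ATTRIB, 0, NULL);

        for (;;) {
            struct kevent event;
            int n = kevent(kq, &change, 1, &event, 1, NULL);
            if (n <= 0)
                break;
            /* We only learn that the directory changed, not which file. */
            printf("directory changed (flags 0x%x)\n", (unsigned)event.fflags);
        }
        close(kq);
        close(fd);
        return 0;
    }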
My advice is to just bite the bullet and do a directory scan in another thread, even if you're talking about thousands of files. But if you insist, here's the answer:
There's no way to do this without rolling up your sleeves and going kernel-diving.
Your first option is to use the FSEvents framework, which sends out notifications when a file is created, edited, or deleted (as well as for changes to attributes). An overview is here, and someone has written an Objective-C wrapper around the API, although I haven't tried it. Note, though, that the overview only mentions events for directories (as with kqueue), not for individual file changes. I ended up using the code from here, along with the header file here, to compile my own logger, which I could use to get events at the individual-file level. You'd have to write some code in your app to run the logger in the background and monitor its output.
Alternatively, take a look at the "fs_usage" command, which constantly monitors all filesystem activity (and I do mean all). This comes with Darwin already, so you don't have to compile it yourself. You can use kqueue to listen for directory changes while monitoring the output from "fs_usage" at the same time. If you get a notification from kqueue that a directory has changed, you can look at the output from fs_usage, see which files were written to, and check the filenames against the directory that was modified. fs_usage is a firehose, so be prepared to use some options along with grep to tame it.
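A hypothetical invocation (the grep pattern would be whatever directory you are watching):

    sudo fs_usage -w -f filesys | grep '/path/you/are/watching'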
To make things more fun, both your FSEvents logger and fs_usage require root access, so you'll have to get authorization from the user before you can use them in your OS X app (check out the Authorization Services Programming Guide for info on how to do it).
If this all sounds horribly complicated, that's because it is. But at least you didn't have to find out the hard way!
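To round this out, here is roughly what the GCD approach mentioned in the question looks like; the watched path is hypothetical, and again the event handler is only told that the directory changed, not which file.

    #include <dispatch/dispatch.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open the directory itself for event-only monitoring. */
        int fd = open("/tmp/watched", O_EVTONLY);   /* path hypothetical */
        if (fd < 0) { perror("open"); return 1; }

        dispatch_queue_t q =
            dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
        dispatch_source_t src =
            dispatch_source_create(DISPATCH_SOURCE_TYPE_VNODE, (uintptr_t)fd,
                                   DISPATCH_VNODE_WRITE, q);
        dispatch_source_set_event_handler(src, ^{
            /* Fires when the directory's contents change; it does NOT
               say which file changed -- hence the question. */
            printf("directory changed\n");
        });
        dispatch_source_set_cancel_handler(src, ^{ close(fd); });
        dispatch_resume(src);
        dispatch_main();   /* never returns */
    }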

Recommended communication pattern for web frontend of command line app

I have a Perl app which processes text files from the local filesystem (think of it as an overly complicated grep).
I want to design a web app which allows remote users to invoke the Perl app by setting the required parameters.
Once it's running, some sort of communication between the Perl app and the web app about the status of the process (running, % done, finished) would be desirable.
What would be a recommended way for the two processes to communicate? I was thinking of a database table, but I'm not really sure it's a good idea.
Any suggestions are appreciated.
Stackers, go ahead and edit this answer to add code examples or links to them.
DrNoone, two approaches come to mind.
callback
Your greppy app needs to offer a callback function that returns the status, and the web app calls it periodically.
event
This makes sense if you are already using a Web server/app framework which exposes an event loop usable from external applications (rather unlikely in Perl land). The greppy app fires events on status changes and the Web app attaches/listens to them and acts accordingly.
For IPC as you envision it, a plain database is not well suited. Look into message queues instead. For good interop, pick an AMQP-compliant implementation.
If you run the process using open($handle, "cmd |") you can read the results in real time and print them straight to STDOUT while your response is open. That's probably the simplest approach.
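For concreteness, here is the same streaming pattern sketched in C; Perl's open($handle, "cmd |") corresponds to popen(3), and the command name below is hypothetical.

    #include <stdio.h>

    int main(void)
    {
        /* Start the command and stream its output as it is produced. */
        FILE *p = popen("long_running_job --progress", "r");
        if (p == NULL) {
            perror("popen");
            return 1;
        }
        char line[4096];
        while (fgets(line, sizeof line, p) != NULL) {
            fputs(line, stdout);   /* relay each progress line immediately */
            fflush(stdout);
        }
        return pclose(p) == -1 ? 1 : 0;
    }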

Setuid with GTK+

I'm trying to write a program and integrate it with a GUI built with GTK+. However, the executable that the GUI calls has the setuid bit set, and GTK+ does not allow such an executable to run, as stated by the GTK+ community. They say that we have to write separate helper programs instead. I don't really understand what that means. Can anyone please shed some light on how to overcome this problem? I really need an immediate solution.
First question: why is your program setuid? Writing setuid programs is not a game that should be played by self-professed Linux newbies. They're dangerous. They're useful - do not get me wrong. But they are dangerous and difficult to write securely.
The GTK+ project states their view on setuid programs very forthrightly at 'GTK+ - Using setuid'. They give their reasons - good ones. They indicate how to avoid problems:
In the opinion of the GTK+ team, the only correct way to write a setuid program with a graphical user interface is to have a setuid backend that communicates with the non-setuid graphical user interface via a mechanism such as a pipe and that considers the input it receives to be untrusted.
Since you're supposed to write a helper program, have you looked for examples? Some are likely provided. Is your program itself a GUI application?
I need root privileges [...] to open some peripheral devices, read the data available in their memory, and then close them...this cannot be done without root perms...also the data read is processed and displayed simultaneously using GTK.
So, this is exactly the sort of scenario that the GTK+ team describe. You need a small setuid root program that is launched by your GUI, and that is connected to it by pipes or a Unix-domain socket, or some similar technique.
When you need data from the peripheral, your main application writes a request to the daemon/helper and then waits for a response containing the data.
In outline, you will have code in your GUI to:
LaunchDaemon(): this will create the plumbing (pipes or socket), fork, and the child will sort out the file descriptors (closing what it does not need) before launching the daemon process.
RequestDaemon(): this will package up a request to the daemon/helper, writing the information to the daemon, and reading back the response.
TerminateDaemon(): this will close the connections to the daemon/helper; it will know that it has no more work to do and will exit.
Meanwhile, your daemon/helper program will:
Settle into a nice comfy loop that:
reads a request from standard input
checks it for validity
executes the request
formats a response (error, or normal)
writes that back to the main GUI
repeats
When it gets EOF from the input, it terminates.
If at all possible, it will open the device once, and then drop root privileges.
This minimizes the exposure to attack.
If the program is no longer running as root, it cannot be abused into doing things that only root can do.
Once a file is open, the permissions are not checked again (so the daemon running as root can open the file, and then throw away its root privileges, if it won't reopen the file).
You should still look at whether the permissions on the peripheral are correct - or why you are needing to read data from something that only root is supposed to be able to read from.
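A condensed, hypothetical sketch of the helper side of this design (the device path and the one-line request protocol are placeholders; a real helper would parse and validate each request):

    #include <sys/types.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Runs setuid root: open the privileged device first... */
        int dev = open("/dev/peripheral0", O_RDONLY);   /* path hypothetical */
        if (dev < 0) {
            perror("open");
            return 1;
        }

        /* ...then drop root for the rest of the process's life;
           the already-open descriptor stays usable. Group first,
           then user, since setuid() forfeits the right to setgid(). */
        if (setgid(getgid()) != 0 || setuid(getuid()) != 0) {
            perror("drop privileges");
            return 1;
        }

        /* Request/response loop over the pipe to the GUI (stdin/stdout). */
        char req[64];
        char buf[256];
        while (fgets(req, sizeof req, stdin) != NULL) {
            ssize_t n = read(dev, buf, sizeof buf);
            if (n < 0)
                printf("ERR read failed\n");
            else
                printf("OK %zd bytes\n", n);
            fflush(stdout);
        }
        /* EOF on stdin: the GUI closed the pipe, so we are done. */
        close(dev);
        return 0;
    }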
I think that the GTK+ team's heart is in the right place when they warn here against using GTK+ in setuid programs. But I have two observations, and a workaround.
First, it is one thing to warn against such a practice, and another thing entirely to make such a practice seemingly impossible. It irritates me to think of designers who say "There is no valid reason for users to do XXX", and then go out of their way to make XXX impossible. Warn of the risk, and let the user take the risk.
Second, the GTK+ team confuses "setuid" with "setuid root". Here's an example of where the distinction is important. In this example, I want not to expand the privileges of a program using GTK+, but to reduce them. Under certain circumstances, I want to be able to run Firefox (well, iceweasel, but it's basically the same) crippled so it can look at only local files, with no network capability. So I've set up iptables in my Linux system so that a particular (artificially created) user has no access to the outside world. I want to be able to run Firefox as that user, no matter which user I actually am. Assuming that the restricted user's uid and gid are 1234, the following general idea will work. Build it as setuid root. Hope this helps.
EDIT 2014-02-22 15:13 UTC
I neglected to mention that you can substitute 0 for each 1234, and you've got root. One could argue that this would be a totally bad idea, and I guess I can understand that point.
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Give the target (restricted) user a sane environment. */
    setenv("HOME", "/u/wally", 1);
    /* Set other environment variables as appropriate. */

    /* Drop the group first: after setuid() we would no longer
       have the privilege to call setgid(). */
    if (setgid(1234)) {
        fprintf(stderr, "setgid() failed\n");
        exit(1);
    }
    if (setuid(1234)) {
        fprintf(stderr, "setuid() failed\n");
        exit(1);
    }

    /* Use execl() and friends, or system(), to do what you want here. */
    return 0;
}
Often, it is better to set up the system such that the device files can be opened by a non-root user, and then let normal non-root processes talk to them.
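On Linux that is commonly done with a udev rule; a hypothetical one-line example that hands a serial device to the dialout group:

    KERNEL=="ttyUSB0", MODE="0660", GROUP="dialout"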

Perl scripts, to use forks or threads?

I am writing a couple of scripts that go and collect data from a number of servers. The number will grow, and I'm trying to future-proof my scripts, but I'm a little stuck.
To start off with, I have a script that looks up an IP in a MySQL database, then connects to each server, grabs some information, and puts it back into the database.
What I have been thinking is that there is a limited amount of time to do this, and if I have 100 servers it will take quite a while to go out to each server, get the information, and push it to the DB. So I have thought about using either forks or threads in Perl.
Which would be the preferred option in my situation? And has anyone got any examples?
Thanks!
Edit: OK, so a bit more information is needed: I'm running on Linux, and what I thought was I could have the master script collect the DB information, then send off each subprocess/task to connect, gather information, and push the information back to the DB.
Which is best depends a lot on your needs; but for what it's worth here's my experience:
Last time I used perl's threads, I found it was actually slower and more problematic for me than forking, because:
Threads copied all data anyway, just as a fork would, but did it all upfront
Threads didn't always clean up complex resources on exit; causing a slow memory leak that wasn't acceptable in what was intended to be a server
Several modules didn't handle threads cleanly, including the database module I was using which got seriously confused.
One trap to watch for is the "forks" library, which emulates "threads" but uses real forking. The problem I faced here was many of the behaviours it emulated were exactly what I was trying to get away from. I ended up using a classic old-school "fork" and using sockets to communicate where needed.
Issues with forks (the library, not the fork command):
Still confused the database system
Shared variables still very limited
Overrode the 'fork' command, resulting in unexpected behaviour elsewhere in the software
Forking is more "resource safe" (think database modules and so on) than threading, so you might want to end up going down that road; the basic pattern is sketched below.
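In outline, the fork-per-server pattern looks like this. The sketch is in C, which Perl's fork() maps onto directly; the server list and the worker are placeholders for the real lookup-and-collect logic.

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    /* Hypothetical worker: the real script would connect to the server,
       gather its data, and format a report. */
    static void collect(const char *server, int out_fd)
    {
        char msg[128];
        int len = snprintf(msg, sizeof msg, "%s: ok\n", server);
        write(out_fd, msg, (size_t)len);
    }

    int main(void)
    {
        const char *servers[] = { "10.0.0.1", "10.0.0.2", "10.0.0.3" }; /* examples */
        int fds[2];
        if (pipe(fds) != 0) {
            perror("pipe");
            return 1;
        }
        for (int i = 0; i < 3; i++) {
            pid_t pid = fork();
            if (pid < 0) {
                perror("fork");
            } else if (pid == 0) {          /* child: work, report, exit */
                close(fds[0]);
                collect(servers[i], fds[1]);
                _exit(0);
            }
        }
        close(fds[1]);                      /* parent keeps only the read end */

        /* EOF arrives once every child has exited and closed its write end. */
        char buf[256];
        ssize_t r;
        while ((r = read(fds[0], buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)r, stdout);
        while (wait(NULL) > 0)              /* reap all children */
            ;
        return 0;
    }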
Depending on your platform of choice, on the other hand, you might want to avoid fork()-ing in Perl. Quote from perlfork(1):
Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.
On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.