Setuid with GTK+ - gtk

I'm trying to write a program and integrate it with a GUI built with GTK+. The executable that the GUI calls has the setuid bit set. However, GTK+ does not allow this executable to run, as stated by the GTK+ community. They say that we have to write separate helper programs instead, and I really don't understand what that means. Can anyone please shed some light on how to overcome this problem? I really need an immediate solution.

First question: why is your program setuid? Writing setuid programs is not a game that should be played by self-professed Linux newbies. They're dangerous. They're useful - do not get me wrong. But they are dangerous and difficult to write securely.
The GTK+ project states their view on setuid programs very forthrightly at 'GTK+ - Using setuid'. They give their reasons - good ones. They indicate how to avoid problems:
In the opinion of the GTK+ team, the only correct way to write a setuid program with a graphical user interface is to have a setuid backend that communicates with the non-setuid graphical user interface via a mechanism such as a pipe and that considers the input it receives to be untrusted.
Since you're supposed to write a helper program, have you looked for examples? It is likely that some are provided. Is your program itself a GUI application?
I need root privileges [...] to open some peripheral devices, read the data available in their memory, and then close them... this cannot be done without root permissions... also the data read is processed and displayed simultaneously using GTK.
So, this is exactly the sort of scenario that the GTK+ team describe. You need a small setuid root program that is launched by your GUI, and that is connected to it by pipes or a Unix-domain socket, or some similar technique.
When you need data from the peripheral, your main application writes a request to the daemon/helper and then waits for a response containing the data.
In outline, you will have code in your GUI to:
LaunchDaemon(): this will create the plumbing (pipes or socket), fork, and the child will sort out the file descriptors (closing what it does not need) before launching the daemon process.
RequestDaemon(): this will package up a request to the daemon/helper, writing the information to the daemon, and reading back the response.
TerminateDaemon(): this will close the connections to the daemon/helper; on reading EOF, the helper will know that it has no more work to do and will exit.
Meanwhile, your daemon/helper program will:
Settle into a nice comfy loop that:
reads a request from standard input
checks it for validity
executes the request
formats a response (error, or normal)
writes that back to the main GUI
repeats
When it gets EOF from the input, it terminates.
If at all possible, it will open the device once, and then drop root privileges.
This minimizes the exposure to attack.
If the program is no longer running as root, it cannot be abused into doing things that only root can do.
Once a file is open, the permissions are not checked again (so the daemon running as root can open the file, and then throw away its root privileges, if it won't reopen the file).
You should still look at whether the permissions on the peripheral are correct - or why you are needing to read data from something that only root is supposed to be able to read from.

I think that the GTK+ team's heart is in the right place when they warn here against using GTK+ in setuid programs. But I have two observations, and a workaround.
First, it is one thing to warn against such a practice, and another thing entirely to make such a practice seemingly impossible. It irritates me to think of designers who say "There is no valid reason for users to do XXX", and then go out of their way to make XXX impossible. Warn of the risk, and let the user take the risk.
Second, the GTK+ team confuses "setuid" with "setuid root". Here's an example of where the distinction is important. In this example, I want not to expand the privileges of a program using GTK+, but to reduce them. Under certain circumstances, I want to be able to run Firefox (well, iceweasel, but it's basically the same) crippled so it can look at only local files, with no network capability. So I've set up iptables in my Linux system so that a particular (artificially created) user has no access to the outside world. I want to be able to run Firefox as that user, no matter which user I actually am. Assuming that the restricted user's uid and gid are 1234, the following general idea will work. Build it as setuid root. Hope this helps.
EDIT 2014-02-22 15:13 UTC
I neglected to mention that you can substitute 0 for each 1234, and you've got root. One could argue that this would be a totally bad idea, and I guess I can understand that point.
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main(int ar_argc,
     char **ar_argv
    )
{
    setenv("HOME", "/u/wally", 1);
    /* Set other environment variables as appropriate. */

    /* Drop the group first: once setuid() succeeds, the process no
       longer has the privilege to change its group.  (Consider also
       setgroups() to clear supplementary groups.) */
    if (setgid(1234))
    {
        fprintf(stderr, "setgid() fail\n");
        exit(1);
    }
    if (setuid(1234))
    {
        fprintf(stderr, "setuid() fail\n");
        exit(1);
    }
    /* Use execl() and friends, or system(), to do what you want here. */
    return 0;
}

Often, it is better to set up the system such that the device files can be opened by a non-root user, and then let normal non-root processes talk to them.
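On a Linux system using udev, for example, a rule along these lines (the vendor/product IDs and group name are placeholders) makes the device node readable by members of an ordinary group, so no setuid program is needed at all:

```
# /etc/udev/rules.d/99-peripheral.rules -- hypothetical device
SUBSYSTEM=="usb", ATTR{idVendor}=="1234", ATTR{idProduct}=="5678", MODE="0660", GROUP="plugdev"
```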

Related

What does sys_vm86old syscall do?

My question is quite simple.
I encountered this sys_vm86old syscall (when reverse engineering) and I am trying to understand what it does.
I found two sources that could give me something but I'm still not sure that I fully understand; these sources are
The Source Code and this page which gives me this paragraph (but it's more readable directly on the link):
config GRKERNSEC_VM86
bool "Restrict VM86 mode"
depends on X86_32
help
If you say Y here, only processes with CAP_SYS_RAWIO will be able to
make use of a special execution mode on 32bit x86 processors called
Virtual 8086 (VM86) mode. XFree86 may need vm86 mode for certain
video cards and will still work with this option enabled. The purpose
of the option is to prevent exploitation of emulation errors in
virtualization of vm86 mode like the one discovered in VMWare in 2009.
Nearly all users should be able to enable this option.
From what I understood, it would ensure that the calling process has CAP_SYS_RAWIO enabled. But this doesn't help me a lot...
Can anybody tell me ?
Thank you
The syscall is used to execute code in VM86 mode. This mode allows you to run old 16-bit "real mode" code (like that present in some BIOSes) inside a protected-mode OS.
See for example the Wikipedia article on it: https://en.wikipedia.org/wiki/Virtual_8086_mode
The setting you found means you need CAP_SYS_RAWIO to invoke the syscall.
I think X11 in particular uses it to call BIOS methods for switching the video mode. There are two syscalls; the one with the old suffix offers fewer operations but is retained for binary (ABI) compatibility.

Monitoring a directory in Cocoa/Cocoa Touch

I am trying to find a way to monitor the contents of a directory for changes. I have tried two approaches.
Use kqueue to monitor the directory
Use GCD to monitor the directory
The problem I am encountering is that I can't find a way to detect which file has changed. I am attempting to monitor a directory with potentially thousands of files in it and I do not want to call stat on every one of them to find out which ones changed. I also do not want to set up a separate dispatch source for every file in that directory. Is this currently possible?
Note: I have documented my experiments monitoring files with kqueue and GCD
My advice is to just bite the bullet and do a directory scan in another thread, even if you're talking about thousands of files. But if you insist, here's the answer:
There's no way to do this without rolling up your sleeves and going kernel-diving.
Your first option is to use the FSEvents framework, which sends out notifications when a file is created, edited or deleted (as well as things to do with attributes). Overview is here, and someone wrote an Objective C wrapper around the API, although I haven't tried it. But the overview doesn't mention the part about events surrounding file changes, just directories (like with kqueue). I ended up using the code from here along with the header file here to compile my own logger which I could use to get events at the individual file level. You'd have to write some code in your app to run the logger in the background and monitor it.
Alternatively, take a look at the "fs_usage" command, which constantly monitors all filesystem activity (and I do mean all). This comes with Darwin already, so you don't have to compile it yourself. You can use kqueue to listen for directory changes, while at the same time monitoring the output from "fs_usage". If you get a notification from kqueue that a directory has changed, you can look at the output from fs_usage, see which files were written to, and check the filenames against the directory that was modified. fs_usage is a firehose, so be prepared to use some options along with grep to tame it.
To make things more fun, both your FSEvents logger and fs_usage require root access, so you'll have to get authorization from the user before you can use them in your OS X app (check out the Authorization Services Programming Guide for info on how to do it).
If this all sounds horribly complicated, that's because it is. But at least you didn't have to find out the hard way!

Recommended communication pattern for web frontend of command line app

I have a perl app which processes text files from the local filesystem (think about it as an overly-complicated grep).
I want to design a webapp which allows remote users to invoke the perl app by setting the required parameters.
Once it's running it would be desirable some sort of communication between the perl app and the webapp about the status of the process (running, % done, finished).
Which would be a recommended way of communication between the two processes? I was thinking in a database table, but I'm not really sure it's a good idea.
any suggestions are appreciated.
Stackers, go ahead and edit this answer to add code examples or links to them.
DrNoone, two approaches come to mind.
callback
Your greppy app needs to offer a callback function that returns the status and which is periodically called by the Web app.
event
This makes sense if you are already using a Web server/app framework which exposes an event loop usable from external applications (rather unlikely in Perl land). The greppy app fires events on status changes and the Web app attaches/listens to them and acts accordingly.
For IPC as you envision it, a plain database is not so suitable. Look into message queues instead. For good interoperability, pick an AMQP-compliant implementation.
If you run the process using open($handle, "cmd |") you can read the results in real time and print them straight to STDOUT while your response is open. That's probably the simplest approach.

Perl scripts, to use forks or threads?

I am writing a couple of scripts that go and collect data from a number of servers. The number will grow, and I'm trying to future-proof my scripts, but I'm a little stuck.
So to start off with, I have a script that looks up an IP in a MySQL database and then connects to each server, grabs some information, and then puts it into the database again.
What I have been thinking is that there is a limited amount of time to do this, and if I have 100 servers it will take a little bit of time to go out to each server, get the information, and then push it to a DB. So I have thought about using either forks or threads in Perl.
Which would be the preferred option in my situation? And has anyone got any examples?
Thanks!
Edit: OK, so a bit more information is needed: I'm running on Linux, and what I thought was I could get the master script to collect the DB information, then send off each sub-process/task to connect and gather information, then push the information back to the DB.
Which is best depends a lot on your needs; but for what it's worth here's my experience:
Last time I used perl's threads, I found it was actually slower and more problematic for me than forking, because:
Threads copied all data anyway, as a fork would, but did it all upfront
Threads didn't always clean up complex resources on exit; causing a slow memory leak that wasn't acceptable in what was intended to be a server
Several modules didn't handle threads cleanly, including the database module I was using which got seriously confused.
One trap to watch for is the "forks" library, which emulates "threads" but uses real forking. The problem I faced here was many of the behaviours it emulated were exactly what I was trying to get away from. I ended up using a classic old-school "fork" and using sockets to communicate where needed.
Issues with forks (the library, not the fork command):
Still confused the database system
Shared variables still very limited
Overrode the 'fork' command, resulting in unexpected behaviour elsewhere in the software
Forking is more "resource safe" (think database modules and so on) than threading, so you might want to end up on that road.
Depending on your platform of choice, on the other hand, you might want to avoid fork()-ing in Perl. Quote from perlfork(1):
Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.

On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.

How can I prevent Windows from catching my Perl exceptions?

I have this Perl software that is supposed to run 24/7. It keeps open a connection to an IMAP server, checks for new mail and then classifies new messages.
Now I have a user that is hibernating his XP laptop every once in a while. When this happens, the connection to the server fails and an exception is triggered. The calling code usually catches that exception and tries to reconnect. But in this case, it seems that Windows (or Perl?) is catching the exception and delivering it to the user via a message box.
Anyone know how I can prevent that kind of wtf? Could my code catch a "system-is-about-to-hibernate" signal?
To clear up some points you already raised:
I have no problem with users hibernating their machines. I just need to find a way to deal with that.
The Perl module in question does throw an exception. It does something like die 'foo bar'. Although the application is completely browser based and doesn't use anything like Wx or Tk, the user gets a message box titled "poll_timer". The content of that message box is exactly the contents of $@ ('foo bar' in this example).
The application is compiled into an executable using perlapp. The documentation doesn't mention anything about exception handling, though.
I think that you're dealing with an OS-level exception, not something thrown from Perl. The relevant Perl module is making a call to something in a DLL (I presume), and the exception is getting thrown. Your best bet would be to boil this down to a simple, replicable test case that triggers the exception (you might have to do a lot of hibernating and waking the machines involved for this process). Then, send this information to the module developer and ask them if they can come up with a means of catching this exception in a way that is more useful for you.
If the module developer can't or won't help, then you'll probably wind up needing to use the Perl debugger to debug into the module's code and see exactly what is going on, and see if there is a way you can change the module yourself to catch and deal with the exception.
It's difficult to offer intelligent suggestions without seeing relevant bits of code. If you're getting a dialog box with an exception message, the program is most likely using either the Tk or wxPerl GUI library, which may complicate things a bit. With that said, my guess would be that it would be pretty easy to modify the exception handling in the program by wrapping the failure point in an eval block and testing $@ after the call. If $@ contains an error message indicating connection failure, then re-establish the connection and go on your way.
Your user is not the exception but rather the rule. My laptop is hibernated between work and home. At work, it is on one DHCP network; at home, it is on another altogether. Most programs continue to work despite a confusing multiplicity of IP addresses (VMWare, VPN, plain old connection via NAT router). Those that don't (AT&T Net Client, for the VPN - unused in the office, necessary at home or on the road) recognize the disconnect at hibernate time (AT&T Net Client holds up the StandBy/Hibernate process until it has disconnected), and I re-establish the connection if appropriate when the machine wakes up. At airports, I use the local WiFi (more DHCP) but turn off the wireless altogether (one physical switch) before boarding the plane.
So, you need to find out how to learn that the machine is going into StandBy or Hibernation mode for your software to be usable. What I don't have, I'm sorry to say, is a recipe for what you need to do.
Some work with Google suggests that ACPI (Advanced Configuration and Power Interface) is part of the solution (Microsoft). APM (Advanced Power Management) may also be relevant.
I've found a hack to avoid modal system dialog boxes for hard errors (e.g. "encountered an exception and needs to close"). I don't know if the same trick will work for the kind of error you're describing, but you could give it a try.
See: Avoiding the “encountered a problem and needs to close” dialog on Windows
In short, set the
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Windows\ErrorMode
registry key to the value "2".