I have a file (e.g. C:\temp\afile.txt) on which a Windows service has an open file handle. After stopping the process, the file handle remains open. I would like to be able to find and delete this handle from a Perl script, given only the file name and path. Is this possible? Thank you for your time.
It is possible to locate which process holds a file handle open and to reach into that process and close the handle, because MS's Process Explorer can do just this. How? I don't know.
You should probably use MoveFileEx(file_name, NULL, MOVEFILE_DELAY_UNTIL_REBOOT) instead. This causes the file to be deleted the next time the system is rebooted.
Win32API::File provides a Perl interface to that system call.
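A minimal sketch of that approach (this assumes Win32API::File exports MoveFileEx and the MOVEFILE_ constants, and uses [] as that module's convention for passing a NULL pointer):

use strict;
use warnings;
use Win32API::File qw(MoveFileEx :MOVEFILE_);

my $file = 'C:\\temp\\afile.txt';

# Passing [] as the new name is Win32API::File's way of passing NULL;
# together with MOVEFILE_DELAY_UNTIL_REBOOT this schedules the file
# for deletion at the next reboot.
MoveFileEx($file, [], MOVEFILE_DELAY_UNTIL_REBOOT)
    or die "MoveFileEx failed: $^E\n";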
I am trying to get the current working directory path using Perl.
When I execute from Ubuntu: root@ubuntu:/var/test/geek# firefox http://localhost/test.html, I get /var/cgi-bin as output in the Perl CGI page instead of /var/test/geek.
The Perl code used:
use Cwd;

my $pwd = cwd();
# ... (rest of the script)
print "<h1>$pwd</h1>";
The above code gives the path of test.pl, not the user's working directory.
Edit: When I run the script alone from the terminal it works fine. For example:
root@ubuntu:/var/test/geek# /var/cgi-bin/test.pl
I get /var/test/geek. But when I call the script from an HTML page using a submit button, it gives the path of the Perl script.
Each process has its own working directory that it inherits from its parent when it gets created.
cwd() returns the current process's working directory.
For a CGI script, the browser doesn't pass its working directory to the server as part of the request. To obtain it, you would need code running on the client system that submits it. That might be an application the user downloads, or possibly, though unlikely, some in-browser code such as JavaScript or a Java applet (this information is likely hidden from in-browser code for security reasons, though).
(The rest assumes Linux; it will likely differ on other operating systems.)
The part below assumes that you are looking for the working directory of a user on the server:
In order to get a specific user's shell's working directory, you would need to identify the PID of the shell and read the working directory from the /proc/<pid>/cwd symlink. (To read these, the process must belong to the user running the code, or the code must run as root, which is a bad idea for a CGI script.) To get the PID of the shell, you would likely start from the output of the w command, or its data source, /var/run/utmp; Sys::Utmp might be useful for this. You might then also need to retrieve a whole lot of extra info to find all the processes that might have the working directory you are looking for.
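A minimal sketch of the /proc lookup (the PID here is hypothetical; on a real system you would discover it from utmp as described above):

use strict;
use warnings;

my $pid = 12345;   # hypothetical PID of the user's shell
my $cwd = readlink("/proc/$pid/cwd")
    or die "Cannot read /proc/$pid/cwd: $!\n";
print "$cwd\n";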
I think you are mixing up the web server and the local user. The web server process has its own working directory when it runs the script, and that is the one cwd() returns.
I get an error from Perl while trying to create a file called .envfile in the root directory / (UNIX only). Permission denied, which is understandable. But is there a way to write this file? I need to do it without any modules, just with built-in functions. I expect to use chmod, but honestly I have no idea how to implement it in the same thread safely.
I need this file so I can write my own ENVs into it for my software (it is a big project with many dirs and needs to operate with many ENVs of its own).
Trying something simple:
my $filename = '/.envfile';
open my $fh, '>', $filename or die "Cannot open $filename: $!";
print {$fh} "some data\n";
close $fh;
Apache says: Permission denied at /var/www/cgi-bin/env.cgi line 41.
Any help appreciated!
Thanks!
If I understand the question correctly, it appears that you also control the software which will ultimately read the file you're trying to create. Is that accurate? If so, change the program to get its environment from somewhere else. Where else? Preferably a new directory, so that you can make it writable by your web server without affecting anything else. I'd probably use /etc/myprogram (because /etc is the standard place for configuration files) or /var/local/myprogram (because /var is the standard place for persistent data files). But not an existing directory which is and should remain writable solely by root.
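For example, the program could load its ENVs from a simple KEY=value file in such a directory (the path and format here are hypothetical, not anything your software already expects):

use strict;
use warnings;

# Hypothetical location and format: one KEY=value pair per line.
my $envfile = '/etc/myprogram/envfile';

open my $fh, '<', $envfile or die "Cannot open $envfile: $!";
while (my $line = <$fh>) {
    chomp $line;
    next if $line =~ /^\s*(?:#|$)/;   # skip comments and blank lines
    my ($key, $value) = split /=/, $line, 2;
    $ENV{$key} = $value;
}
close $fh;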
Short of exploiting a security flaw, Perl does not allow you to sidestep filesystem security (permissions). And that is a Good Thing. If it were allowed, it would mean that anyone who finds an exploit in your Perl code could then change any file on your computer, potentially replacing it with the most malicious code ever written.
Thus, the only way that your Perl can create a file in / is if it runs as root or uses su/suid to run some other program as root. And you really, really, really do not want CGI scripts or web applications running as root because, unless you do everything absolutely perfectly in your code, and there are no exploitable bugs in perl itself, or apache, or the kernel, then, by running your web code as root, you're potentially handing root access to any random script kiddie on the internet.
If you really, truly, absolutely have no choice other than to have web-accessible code write arbitrary files to /, then the least-bad, least-insecure way to do it would be to create a very tiny helper program which takes a file name and file contents as inputs, checks to verify that the named file does not already exist (so that an attacker can't use it to overwrite, say, your kernel), and then creates the named file with the provided contents. Aside from maybe a little additional sanity/security checking, it should do absolutely nothing else because the more complex this helper program is, the more likely it is to contain exploitable flaws. Then have the web code use suid to run the helper program, with suid configured to allow the web user (and only the web user) to run the helper program (and only the helper program) with no password.
But don't do that unless you really, truly, absolutely have no other option. It is not the best way to do it, it is the least bad way. Which means it's still a bad idea.
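If you do go down that road anyway, a minimal sketch of such a helper might look like this (the filename policy and file mode are assumptions; tighten them for your own situation):

#!/usr/bin/perl
use strict;
use warnings;
use Fcntl qw(O_WRONLY O_CREAT O_EXCL);

my ($name, $contents) = @ARGV;
die "Usage: $0 /filename contents\n" unless defined $name;

# Only allow simple names directly under / -- no subdirectories,
# no parent-directory tricks.
die "Bad filename\n" unless $name =~ m{\A/[A-Za-z0-9._-]+\z};
die "Bad filename\n" if $name =~ /\.\./;

# O_EXCL makes the create fail if the file already exists, so the
# helper can never be used to overwrite an existing file.
sysopen my $fh, $name, O_WRONLY | O_CREAT | O_EXCL, 0644
    or die "Cannot create $name: $!\n";
print {$fh} ($contents // '');
close $fh or die "Error closing $name: $!\n";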
Create the file 'by hand' and set its owner to the owner of the Apache process, e.g.:
sudo touch /.envfile
sudo chown www-data:www-data /.envfile
sudo chmod u+rw /.envfile
You're executing your Perl program as a user without sufficient privilege. Run the Perl program using a user with sufficient privilege (e.g. using sudo or su).
When a program runs, it is sort of running in a particular directory, which is its current working directory.
I'm trying to understand more about the idea of a cwd. How does a program know what its cwd is? Where is that information stored?
I know perfectly well how to use the os module in python, but I don't really understand what it means to have a cwd. Is it simply a data attribute, "this is where we are", that we can change arbitrarily? And we simply look for things and create things on that particular section of the HD? Or is some sort of pathway actually opening and closing actively when we change cwd, like a door getting shut and another being opened?
What happens on the computer when I change cwd in a program?
This may be language-agnostic, I am unsure.
The current working directory is (at least on most operating systems) an attribute of a process, so yes, it is more or less a simple attribute stating "this is where we are". As it's an attribute of a process, it is stored and managed by the OS kernel.
It can be changed arbitrarily by calling e.g. os.chdir in Python, and a shell similarly changes its working directory each time you run the builtin cd command. Both would normally call the same API of the operating system, e.g. chdir(). Changing the cwd is subject to filesystem permissions, so you can only change the working directory to a path that actually exists and that you have permission to access.
The cwd is also involved in file operations: when a process opens a file path that is not absolute, the name is resolved relative to the cwd of the process.
On Unix systems the cwd is inherited from the parent process, so the cwd of a process you start from a shell will be the directory you are in when you start the process (and not, e.g., the directory of the executable you start).
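A small demonstration of both points, sketched in Perl (the target directory is arbitrary; any directory you can access works):

use strict;
use warnings;
use Cwd qw(cwd);

print "started in: ", cwd(), "\n";   # inherited from the parent process
chdir '/tmp' or die "chdir failed: $!";
print "now in:     ", cwd(), "\n";   # prints /tmp
# From here on, a relative path such as 'data.txt' passed to open()
# would be resolved as /tmp/data.txt.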
I've got a Clipper system writing csv files to a Windows directory. I have a Perl script running on a Linux server that is reading a mount of that Windows directory and importing the files to a database.
Right now we're using flag files to indicate when a csv is no longer being written to; the flag file gets written after the csv is done. I'd really rather just get what I need from the csv itself, but I can't seem to find a way to tell when the file is open and being written to.
lsof doesn't seem to answer my need. I've tried using flock and opening the file with an exclusive lock, thinking it might throw an error if the file is being modified, but it doesn't.
Any thoughts?
According to Linux's man page for flock(2):

    flock() does not lock files over NFS. Use fcntl(2) instead: that does work over NFS, given a sufficiently recent version of Linux and a server which supports locking.
Have you tried using fcntl()? Googling, I found a few examples of people using fcntl with CIFS.
It may not work since you're mounting to Windows, but maybe inotify would help.
If inotify does not work, use Poor Man's Polling: if the modification time is older than two seconds, the file is finished being written.
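A rough sketch of that polling approach (the path and the two-second threshold are placeholders):

use strict;
use warnings;

my $file = '/mnt/windows/incoming/data.csv';   # hypothetical mount path

# Wait until the file has not been modified for at least two seconds.
while (1) {
    my @st = stat $file or die "Cannot stat $file: $!";
    last if time() - $st[9] > 2;   # $st[9] is the mtime
    sleep 1;
}
# The file is now (probably) finished; safe to import.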
Something like Linux::Inotify2 or File::Monitor would do the trick for monitoring the files.
I have a Perl script that contains this code snippet, which calls the system shell to get some files by SFTP and unzip them with WinZip:
# Run script to get files from remote server
system "exec_SFTP.vbs";

# Unzip any files that were retrieved
foreach my $zipFile (<*.zip>) {
    system "wzunzip", $zipFile;
}
Even if some files are retrieved, they are never unzipped: by the time the files have been retrieved and the SFTP connection is closed, the Perl script has already completed the unzip step, so it found nothing to unzip.
My short-term fix is to insert
sleep(60);
before the unzip step, but that assumes that the SFTP connection will finish within 60 seconds, which may sometimes be a gross over-estimate, and other times an under-estimate.
Is there a more sound way to cause Perl to pause until the SFTP connection is closed before proceeding with the unzip step?
Edit: Responders have questioned (and reasonably so) the use of a VB script rather than having Perl do the file transfer. It has to do with security -- the VB script is maintained by others and is authorized to do the SFTP.
Check the code in your *.vbs file. The system function waits for the child process to finish before execution continues. It appears that your *.vbs file is forking a background task to do the SFTP transfer and returning immediately.
In a perfect world your script would be rewritten to use Net::SFTP::Foreign and Archive::Extract.
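A sketch of what that rewrite might look like (host, user, and paths are placeholders; error handling kept minimal):

use strict;
use warnings;
use Net::SFTP::Foreign;
use Archive::Extract;

# Hypothetical connection details.
my $sftp = Net::SFTP::Foreign->new('sftp.example.com', user => 'someuser');
$sftp->die_on_error("SFTP connection failed");

# Download every .zip file from the remote directory into the cwd.
$sftp->mget('/remote/outbox/*.zip', '.');

# Unzip whatever arrived.
for my $zip (glob '*.zip') {
    my $ae = Archive::Extract->new(archive => $zip);
    $ae->extract or die $ae->error;
}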
An ugly, quick-hackish way might be to create a touch-file before your first system call, alter your SFTP-fetching script to delete the file once it is done, and then wait in a loop like so:
while (-e 'touch.file') {
    sleep 5;
}
# foreach [...]
Of course, you would need to take care that your .vbs doesn't fail and leave the touch-file undeleted, among other bad side effects. This would be a quick solution (if none of the other suggestions work) until you get the time to rewrite without system() calls.
You need a way for Perl to wait until the SFTP transfer is done, but as your script is currently written, Perl has no way of knowing this. (It looks like you're combining at least two scripting languages and a (GUI?) SFTP client; this can work, but it's not exactly reliable or robust. Why use VBScript to start the SFTP transfer?)
I can think of four options:
1. Your Perl script could do the SFTP transfer itself, using something like CPAN's Net::SFTP module, rather than spawning an external job whose status it cannot track.
2. Your Perl script could spawn a command-line SFTP utility (like PSFTP) that doesn't return until the transfer is done (see the sketch after this list).
3. You could change the exec_SFTP.vbs script to not return until the transfer is done.
4. If you're currently using a graphical SFTP client and can't switch for whatever reason, I'd recommend using a scripting language like AutoIt instead of Perl. AutoIt has features to wait for windows to change state and so on, so it could more easily monitor an activity's completion.
Options 1 or 2 would be the most robust and reliable.
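For option 2, a minimal sketch (assuming PuTTY's psftp is on the PATH, and that commands.txt is a hypothetical batch file containing the get commands):

# psftp reads its commands from the batch file and exits when done,
# so system() blocks until the transfer is complete.
system('psftp', '-b', 'commands.txt', 'user@sftp.example.com') == 0
    or die "psftp failed: $?";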
The best I can suggest is modifying exec_SFTP.vbs to exit only after the file transfer is complete. system waits for the program it called to complete, so that should solve your problem:
system LIST
system PROGRAM LIST
    Does exactly the same thing as "exec LIST", except that a fork is done first, and the parent process waits for the child process to complete.
If you can't modify the .vbs script to stay alive until the transfer completes, you may be able to track subprocess creation. If you can get the subprocess IDs, you can monitor them and thereby know when the .vbs script's various offspring terminate.
Win32::Process::Info lets you get the subprocess IDs of a running process.
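A rough sketch of that idea (the PID is hypothetical, and this assumes the module's Subprocesses method behaves as documented, mapping each given PID to the PIDs of its process subtree):

use strict;
use warnings;
use Win32::Process::Info;

my $parent_pid = 1234;   # hypothetical PID of the process you spawned

my $pi = Win32::Process::Info->new();
my %subs = $pi->Subprocesses($parent_pid);
my @offspring = @{ $subs{$parent_pid} || [] };
print "offspring PIDs: @offspring\n";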
Maybe this is a dumb question, but why not just use the Net::SFTP and Archive::Extract Perl modules to download and unzip the files?
system will not return until the shell it is running the command in has returned; this may not behave as expected when launching graphical programs or file associations.
See if either of the following helps:
system('cscript exec_SFTP.vbs');
use Win32;
use Win32::Process;

# Note: Win32::Process::Create may require the full path to wscript.exe.
Win32::Process::Create(my $proc, 'C:\\Windows\\System32\\wscript.exe',
    'wscript exec_SFTP.vbs', 0, NORMAL_PRIORITY_CLASS, '.')
    or die Win32::FormatMessage(Win32::GetLastError());
$proc->Wait(INFINITE);
Have a look at IPC::Open3
IPC::Open3 - open a process for reading, writing, and error handling using open3()
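A brief sketch of waiting on the child via open3 (the command and arguments are placeholders; //nologo just suppresses the cscript banner):

use strict;
use warnings;
use IPC::Open3;
use Symbol qw(gensym);

my $err = gensym;
my $pid = open3(my $in, my $out, $err,
                'cscript', '//nologo', 'exec_SFTP.vbs');
print while <$out>;          # read the script's output, if any
waitpid($pid, 0);            # block until the child exits
my $status = $? >> 8;
print "exited with status $status\n";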