What module would you recommend to get a disk name on Linux? I've done some searching on CPAN, but all the modules I've found are too old. In Bash I can use something like:
disk_name=$(df |grep -w '/'|awk '{print $1}'|cut -d/ -f3)
echo $disk_name
sda6
Please help me understand how to do the same in Perl.
Thanks.
The "proper" way to list mounted disks on Linux is through the getmntent() system call, which can be accessed from Perl using the Quota module:
use Quota;
Quota::setmntent();
while (my ($dev, $path, $type, $opts) = Quota::getmntent()) {
print "The root device is $dev.\n" if $path eq "/";
}
Quota::endmntent();
As a bonus, using the Quota module to list device mount points should be fairly portable to other Unixish systems, which parsing various system files or the output of df may not be. Unfortunately, this seemingly basic module is not included in the standard Perl distribution, so you have to get it from CPAN (or from your distro's package repository — for example, Debian / Ubuntu have the libquota-perl package).
PS. Simply splitting the device name on / and taking the third element (as your cut command does) is not a safe way to turn, say, /dev/sdb1 into sdb1. Some issues with it are that:
Not all block devices have to live under /dev — it's really just a convention.
Even if the device file is under /dev, it might be in a subdirectory of it. For example, my root filesystem is on the device /dev/disk/by-uuid/627f8512-f037-4c6c-9892-6130090c0e0f.
Sometimes, the device name might not even be an actual filesystem path: for example, virtual or in-memory filesystems such as tmpfs are often mounted with the device name none, but it's possible to use any device name with them.
If you do want to get rid of the /dev/ part, I'd suggest a conservative approach using a regexp, for example like this:
if ($dev =~ m(^/dev/(.*)$)s) {
    print "The directory $path is mounted from device $1 under /dev.\n";
} else {
    print "The directory $path is not mounted from a device under /dev.\n";
}
What you're describing is not the disk name but the device name of the block device representing the partition mounted at root (/). On a regular computer it would normally be something like /dev/sdXN or /dev/hdXN, with X being the disk letter (the primary hard drive is usually a, the secondary b, etc.) and N being the partition number on that device.
Provided you're always running on a Unix system, you can try reading the /etc/mtab file, which lists all mounted partitions, or the special file /proc/mounts, which pretty much does the same. You'll need to parse it afterwards to find the entry you need and get the device name from it.
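For example, here is a minimal sketch that scans /proc/mounts for the device mounted at / (each line there is "device mountpoint fstype options dump pass"):

open(my $mounts, '<', '/proc/mounts') or die "Cannot open /proc/mounts: $!";
while (my $line = <$mounts>) {
    my ($dev, $path) = split ' ', $line;    # first two fields: device and mount point
    print "The root device is $dev\n" if $path eq '/';
}
close($mounts);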
Alternatively, you can just run df as a process and read its output into Perl, something like
open(DF, "df|") or die "Cannot run df: $!";
my @mount_points = <DF>;    # slurp every line of df's output
close(DF);
and then iterate over the data to find what you need. I'm not aware of any modules off the top of my head that would do the job for you, but the code seems pretty simple to me anyway.
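For example, continuing from the snippet above, a rough sketch of that iteration (it assumes df's default column layout, where the first field is the device and the last is the mount point; using df -P avoids long device names wrapping onto a second line):

shift @mount_points;    # drop the "Filesystem ..." header line
for my $row (@mount_points) {
    my @fields = split ' ', $row;
    print "The root device is $fields[0]\n" if $fields[-1] eq '/';
}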
P.S. Note that Mac OS X, while being a derivative of BSD, doesn't have the same file structure, and therefore this approach wouldn't work. On Mac OS X, you can read the file /etc/fstab.hd, which contains similar info but in a slightly different format.
One way to do just what you are doing in the question:
df / | perl -ne 'print "$1\n" if m{^/\w+/(\w+)}'
but using a CPAN library to do it is probably better.
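One such library (not in core, so it has to come from CPAN) is Sys::Filesystem, which wraps the platform-specific mount tables; here is a rough, untested sketch of using it for this task:

use Sys::Filesystem;

my $fs = Sys::Filesystem->new();
for my $name ($fs->filesystems()) {
    my $mount_point = $fs->mount_point($name);
    next unless defined $mount_point && $mount_point eq '/';
    print "The root device is ", $fs->device($name), "\n";
}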
Related
I have a program that checks to see if the files in my directory are readable, writable, and executable.
I have it set up so it looks like
if (-e $file) {
    print "exists";
}
if (-x $file) {
    print "executable";
}
and so on
but my issue is that when I run it, it shows that the text files are executable too: plain text files with one word in them. I feel like there is an error. What did I do wrong? I am a complete Perl noob, so forgive me.
It is quite possible for a text file to be executable. It might not be particularly useful in many cases, but it's certainly possible.
In Unix (and your Mac is running a Unix-like operating system) the "executable" setting is just a flag stored in the file's metadata (its inode). That flag can be set on or off for any file.
There are actually three of these permissions, which record whether you can read, write or execute a file. You can see these permissions by using the ls -l command in a terminal window (see man ls for more details of what various ls options mean). There are probably ways to view these permissions in the Finder too (perhaps a "properties" menu item or something like that - I don't have a Mac handy to check).
You can change these permissions with the chmod ("change mode") command. See man chmod for details.
For more information about Unix file modes, see this Wikipedia article.
But whether or not a file is executable has nothing at all to do with its contents.
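If you want to inspect the flag itself rather than ask whether you can execute the file, one option is to look at the mode bits that stat returns; here is a small sketch using only core modules (the file name is just an example):

use Fcntl ':mode';    # S_IXUSR, S_IXGRP, S_IXOTH permission-bit constants

my $file = 'test.txt';    # example file name
my $mode = (stat $file)[2] or die "Cannot stat $file: $!";
printf "Permissions of %s are %04o\n", $file, $mode & 07777;
print "$file has an execute bit set\n" if $mode & (S_IXUSR | S_IXGRP | S_IXOTH);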
The statement if (-x $file) does not check whether a file is an executable, but whether your user has execute privileges on it.
For checking if a file is executable or not, I'm afraid there isn't a magic method for it. You may try to use:
if (-T $file) for checking if the file has an ASCII or UTF-8 encoding.
if (-B $file) for checking if the file is binary.
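For instance, combining the tests (the file name is just an example):

my $file = 'test.txt';    # example file name

if (-x $file && -T $file) {
    print "$file is executable but looks like plain text\n";
}
elsif (-x $file && -B $file) {
    print "$file is executable and looks like a binary\n";
}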
If this is unsuitable for your case, consider the following:
Assuming you are in a Linux environment, note that every file can be executed. The question here is: is the execution of, e.g., test.txt going to throw an error on standard error (STDERR)?
Most likely, it will.
If the test.txt file contains:
Some text
and you launch it from your Perl script with system("./test.txt");, this will print to STDERR something like:
./test.txt: line 1: Some: command not found
If for some reason you are looking to run all the files of your directory (in a for loop, for instance), be warned that this is pretty dangerous, since you will launch all your files and you may not be willing to do so. Especially if the Perl script is in the same directory you are checking (this will lead to undesirable script behaviour).
Hope it helps ;)
Problem outline:
need to allow uploading ZIP files (and tgz and other compressed directory trees) via a web form
the ZIP files should be extracted so their contents can be handled
planning to use Archive::Extract for the extraction
there are things like ZIP bombs and the like...
From the manual
Archive::Extract can use either pure perl modules or command line
programs under the hood. Some of the pure perl modules (like
Archive::Tar and Compress::unLZMA) take the entire contents of the
archive into memory, which may not be feasible on your system.
Consider setting the global variable $Archive::Extract::PREFER_BIN to
1, which will prefer the use of command line programs and won't
consume so much memory.
The questions are:
When I set $Archive::Extract::PREFER_BIN = 1, am I sufficiently protected against ZIP-bomb-like attacks?
$Archive::Extract::PREFER_BIN protects me against excessive memory usage, but are the standard unzip, tar -z and unrar binaries safe against zip-bomb-like attacks?
If not, how can I safely handle an uploaded compressed directory tree? (i.e. there is not only one file inside the ZIP archive)
$Archive::Extract::PREFER_BIN = 1 doesn't protect you against zip bombs; you are just passing the problem on to your system's unzip binary.
This SO question may help you. I like the idea of running a second process with ulimit.
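For example, a rough sketch of that idea (the archive name, target directory and limit values are all made up; adjust them for your environment, and note that ulimit -f only caps the size of each individual file unzip may create):

my $zip = 'upload.zip';         # hypothetical uploaded archive
my $dir = '/tmp/extracted';     # hypothetical extraction target
my $rc  = system('/bin/sh', '-c',
                 'ulimit -v 1048576; ulimit -f 2097152; exec unzip -n -q "$1" -d "$2"',
                 'sh', $zip, $dir);
die "unzip failed or was killed by the resource limits\n" if $rc != 0;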
I'm looking for a virtual filesystem layer in Perl. Something that would provide a general abstraction for basic filesystem routines like ls, mkdir and so on, regardless how the actual filesystem is implemented.
I'd like an interface like this:
# create a directory "/some/path/tmp" in my current filesystem
my $plainfs = Module->new(type => 'local', root => '/some/path');
$plainfs->mkdir("/tmp");
# create "tmp" dir on a remote filesystem
my $sshfs = Module->new(type => 'ssh', root => 'user:password@example.com:~/pub');
$sshfs->mkdir("/tmp");
I found the VFS package on MetaCPAN; unfortunately, it contains only empty, unimplemented modules.
Is something already implemented? Right now, I'm looking for only “local” filesystems and ftp or ssh—I don't need a database “filesystem” or any other exotic “filesystem” like CVS or so. Searching 20k MetaCPAN modules is painful without any tagging system or alike…
Perhaps File::System is what you're looking for. It provides basic functionalities found in common operating systems for managing a virtual file system (not necessarily comprised only of files and directories).
Most of the functionality is presented as methods of the File::System::Object package.
What about some FUSE implementation (file system in userspace)? I would guess there is at least one pseudo-filesystem implemented in Perl based on that. After all, it should be quite easy to implement; basically it's no more than a set of operations like mount, ls, df, stat and so on. I once went through the autofs sources in C, and they looked pretty straightforward. You might want to see http://code.google.com/p/mogilefs/ as well.
Don't get too hung up on the module approach. All you need is some utility that mounts an SSH/FTP filesystem as a local filesystem; then you will simply use standard commands like cd, mkdir and so on. The reason why you don't see any modules for this is that this approach is generally preferred.
Look at http://sourceforge.net/apps/mediawiki/fuse/index.php?title=FileSystems
You will simply use FUSE to mount any of those file systems and that is it. Here are some links to look at, but most of those can be got as packages in most distributions too.
http://sourceforge.net/projects/lufs/
http://lftpfs.sourceforge.net
Here is a module to mount FUSE file systems from within Perl:
http://search.cpan.org/~dpavlin/Fuse/Fuse.pm
There are a LOT of File::* modules which handle different parts of cross-platform filesystem management.
For example:
use File::Spec::Functions qw(catfile);
This will let you write my $filename = catfile $root, $path, "$filename.$ext"; or my $new_directory = catfile $path, "new_sub_directory"; and be sure the correct separators are used, e.g. / or \, et cetera.
Another thing you seem to want can be had with:
use File::Path qw(make_path);
which is pretty handy, and can be called like make_path($new_directory, { mode => 0755 });
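Putting the two together, a small sketch (the paths are made up for illustration):

use File::Spec::Functions qw(catdir catfile);
use File::Path qw(make_path);

my $root          = 'some_root';                           # hypothetical base directory
my $new_directory = catdir($root, 'new_sub_directory');    # portable path joining
make_path($new_directory, { mode => 0755 });               # like mkdir -p

my $filename = catfile($new_directory, 'example.txt');
open(my $fh, '>', $filename) or die "Cannot create $filename: $!";
print {$fh} "hello\n";
close($fh);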
I'm not really sure if File::System actually handles remote systems the way you want.
A couple of different ways to handle that occur to me, but I think Net::SSH::Expect is what I've used in the past, and it isn't too bad, although you'd probably have an easier time if you could somehow mount the remote filesystem locally, do what you have to do, and then unmount it.
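For the remote case, here is a very rough sketch with Net::SSH::Expect (host, credentials and the command are placeholders, and error handling is omitted):

use Net::SSH::Expect;

my $ssh = Net::SSH::Expect->new(
    host     => 'example.com',    # placeholder host
    user     => 'user',           # placeholder credentials
    password => 'password',
    raw_pty  => 1,
);
$ssh->login();
my $output = $ssh->exec('mkdir -p ~/pub/tmp');    # run the mkdir on the remote side
print $output;
$ssh->close();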
My question is in two parts:
1) Why does grep hang when I grep all files under "/"?
for example :
grep -r 'h' ./
(Note: right before the hang/crash, I see some "no such device or address" messages regarding sockets...)
Of course, I know that grep shouldn't run against a socket, but I would think that since sockets are just files in Unix, it should return a negative result, rather than crashing.
2) Now, my follow-up question: in any case -- how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this? In particular, I'm looking for all recently written log files.
As #ninjalj said, if you don't use -D skip, grep will try to read all your device files, socket files, and FIFO files. In particular, on a Linux system (and many Unix systems), it will try to read /dev/zero, which appears to be infinitely long.
You'll be waiting for a while.
If you're looking for a system log, starting from /var/log is probably the best approach.
If you're looking for something that really could be anywhere in your file system, you can do something like this:
find / -xdev -type f -print0 | xargs -0 grep -H pattern
The -xdev argument to find tells it to stay within a single filesystem; this will avoid /proc and /dev (as well as any mounted filesystems). -type f limits the search to ordinary files. -print0 prints the file names separated by null characters rather than newlines; this avoids problems with files having spaces or other funny characters in their names.
xargs reads a list of file names (or anything else) on its standard input and invokes the specified command on everything in the list. The -0 option works with find's -print0.
The -H option to grep tells it to prefix each match with the file name. By default, grep does this only if there are two or more file names on its command line. Since xargs splits its arguments into batches, it's possible that the last batch will have just one file, which would give you inconsistent results.
Consider using find ... -name '*.log' to limit the search to files with names ending in .log (assuming your log files have such names), and/or using grep -I ... to skip binary files.
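Putting those options together, something like this (with pattern standing in for whatever you are actually searching for):

find / -xdev -type f -name '*.log' -print0 | xargs -0 grep -I -H pattern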
Note that all this depends on GNU-specific features. Some of these options might not be available on MacOS (which is based on BSD) or on other Unix systems. Consult your local documentation, and consider installing GNU findutils (for find and xargs) and/or GNU grep.
Before trying any of this, use df to see just how big your root filesystem is. Mine is currently 268 gigabytes; searching all of it would probably take several hours. A few minutes spent (a) restricting the files you search and (b) making sure the command is correct will be well worth the time you spend.
By default, grep tries to read every file. Use -D skip to skip device files, socket files and FIFO files.
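For example, applied to the command from the question (note that -D is a GNU grep option):

grep -r -D skip 'h' ./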
If you keep seeing error messages, then grep is not hanging. Keep iotop open in a second window to see how hard your system is working to pull all the contents off its storage media into main memory, piece by piece. This operation is bound to be slow unless you have a very barebones system.
Now, my follow-up question: in any case -- how can I grep the whole filesystem? Are there certain *NIX directories which we should leave out when doing this? In particular, I'm looking for all recently written log files.
Grepping the whole FS is very rarely a good idea. Try grepping the directory where the log files should have been written; likely /var/log. Even better, if you know anything about the names of the files you're looking for (say, they have the extension .log), then do a find or locate and grep the files reported by those programs.
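For recently written log files specifically, something along these lines (the 24-hour window is just an example):

find /var/log -type f -name '*.log' -mtime -1 -exec grep -H pattern {} +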
My hosted scripts have been moved and no longer work.
The specified CGI application
misbehaved by not returning a complete
set of HTTP headers.
I notice that someone at my host company has modified my scripts so that where I used to have
use lib 'd:/myorig/LIB';
I now have
use lib '//newhost/LIB';
Should this work?
I tried 1800 INFORMATION's suggestion and ran the minimal script of
#!perl -w
use lib '//whatever/lib';
print "success";
...which gave the same result.
Update: ysth's suggestion of FatalsToBrowser did indeed reveal more information. It looks like the path (added by someone from the hosting company) might be wrong.
Update2: The hosting company now says that these scripts, unchanged from the previous host mind you, are throwing lots of syntax errors. "Since we cannot debug your scripts for you we suggest you contact the original programmer and ask them for help". <grinds teeth>
Partial Resolution: The hosting company finally realised they hadn't set permissions correctly. They still aren't right, and (aargh) they don't allow site owners to set folder permissions, not even on folders within their own sites.
I don't know if it should work or not, but my intuition is that it would be okay. However, the two use lib lines you posted are not equivalent.
# go to the 'd' drive and use the 'myorig/LIB' directory on that drive
use lib 'd:/myorig/LIB';
# go to the 'LIB' share on the 'newhost' server - no path below the share is specified, which looks suspect to me
use lib '//newhost/LIB';
Perhaps you need to specify the path to the share on the server? Also, you might need to look at permissions? Maybe the user the CGI is running as cannot access that network path?
Also, you could write a simple (non-CGI) program to test your theory and just run it:
#!perl -w
use lib '//whatever/lib';
print "success";
Then just run that on the server if you can and see what happens.
No; the path is incomplete. It needs both a server name and a complete path. It is bad practice as well, because it requires that two machines be monitored rather than one for your application to function.
The specified CGI application misbehaved by not returning a complete set of HTTP headers.
That's a non-error. If you are lucky, your hosting company will make an error log available to you that will show the actual error that perl is dying with. If not,
consider using
use CGI::Carp "fatalsToBrowser";
for testing. (If you are paranoid (which is not a bad thing to be), you will refrain from leaving that enabled once you are done testing, since errors can commonly provide information about your code or even your database that may help a black hat exploit security holes.)
I know I ran into trouble trying to use mapped drives and UNC paths from Apache because the Apache user was not allowed to use network drives. That was difficult to figure out -- but it's possible to do it. That may be a related problem.
#!perl -w
print "HTTP/1.0 200 OK\nContent-Type: text/plain\n\n";
my $path = "//whatever/lib";
print "\nExists ", -e $path;
print "\nDirectory ", -d $path;
print "\nReadable ", -r $path;
print "\nListing:\n";
print "\t$_\n" for glob "$path/*";