Version 5 UUID in Perl

Off topic:
I'm new to Stack Overflow, and I wanted to say hello!
On topic:
I'm generating a version 5 UUID for an application that needs randomized folder creation and deletion, keyed by a timestamp from time():
my $md5_UUID = create_uuid_as_string(UUID_MD5, time."$job");
These folders are generated per run for each job and are deleted after the run. If the same UUID is somehow generated twice, the roughly 1,000 jobs that are running could halt.
Is there any information I can pull from this, and is there any possibility of collisions (different data generating the same UUID)? Are they truly unique? Also, which UUID version should I use: SHA-1 (v5) or MD5 (v3)?

Use OS Tools
There's probably a pure Perl solution, but it may be overkill. If you are on a Linux system, you can capture the results of mktemp or uuidgen and use them in your Perl script. For example:
$ perl -e 'print `mktemp --directory`'
/tmp/tmp.vx4Fo1Ifh0
$ perl -e '$folder = `uuidgen`; print $folder'
113754e1-fae4-4685-851d-fb346365c9f0
The mktemp utility is nice because it will atomically create the directory for you, in addition to returning the directory name. You also have the ability to give more meaningful names to the directory by modifying the template (see man 1 mktemp); in contrast, UUIDs are not really good at conveying useful semantics.
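For completeness, the pure-Perl route does exist in core: File::Temp can create a uniquely named directory much like mktemp, and can clean it up for you. A minimal sketch, assuming the system temp directory is acceptable and using a hypothetical template name:
use strict;
use warnings;
use File::Temp qw(tempdir);

# Creates a unique directory under the system temp dir; CLEANUP removes it
# when the program exits, which suits per-job scratch space.
my $dir = tempdir("job_XXXXXX", TMPDIR => 1, CLEANUP => 1);
print "working in $dir\n";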

If the folders last only the length of a job, and all the jobs are running on the same machine, you can just use the PID as the folder name. No need for UUIDs at all.
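A minimal sketch of that idea, with a hypothetical base path and assuming one working directory per process:
use strict;
use warnings;

# $$ is the current process ID, which is unique among live processes on one host.
my $dir = "/tmp/job_$$";
mkdir $dir or die "cannot create $dir: $!";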

Use a v1 UUID
Perl's time() function is accurate to the second. So, if you're starting your jobs multiple times per second, or simultaneously on separate hosts, you could easily get collisions.
By contrast, a v1 UUID's time field is granular to 100-nanosecond intervals, and it includes the MAC address of the generating host. See RFC 4122 for details. I can imagine a case where that wouldn't guarantee uniqueness (the client machines are VMs on separate layer-3 virtual networks, all with the same virtual MAC address), but that seems pathologically contrived.
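If you stay with UUID::Tiny (which the create_uuid_as_string call in the question suggests), switching to a v1 UUID is a one-line change. A sketch, assuming the ':std' export set:
use strict;
use warnings;
use UUID::Tiny ':std';

# v1 UUIDs combine a 100-nanosecond timestamp with a node identifier,
# so jobs started in the same second still get distinct values.
my $uuid = create_uuid_as_string(UUID_V1);
print "$uuid\n";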

Related

Perl Proc module's running() method keeps the program hung and never returns - why?

Perl Proc module's running() function never exits - it keeps running indefinitely
#!/usr/bin/perl -w
use Proc::PID::File;
my %g_args = ( name => "temp", verify => 1, dir => "/home/username/");
print "Hello , world";
print Proc::PID::File->running(%g_args);
exit(0);
Even on Ctrl+C it's not being killed.
It's not even throwing an exception - where am I going wrong?
I'm a complete beginner with Perl.
File locking on NFS-mounted disks is problematic, even at the best of times. Proc::PID::File seems designed to operate on local filesystems (at least, my perusal of the code doesn't indicate that it takes the special care required to handle remote filesystems). Hanging is unfortunately typical of some NFS-related problems, and you will not be able to kill the process easily.
Is there some reason you need to use the home directory? If you only need synchronization for jobs running on a single machine, /tmp should suffice. If you need to synchronize across multiple machines, then consider modules that are known to be more NFS-safe, or use a client-server model and avoid filesystems entirely. CPAN is full of solutions.
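If single-machine synchronization is all that's needed, a plain flock on a file under /tmp avoids both the module and the NFS question entirely. A rough sketch, with a hypothetical lock-file name:
use strict;
use warnings;
use Fcntl qw(:flock);

open my $lock, '>', '/tmp/myjob.lock' or die "cannot open lock file: $!";
unless (flock $lock, LOCK_EX | LOCK_NB) {
    die "another instance is already running\n";
}
# ... do the work; the lock is released when $lock is closed or the process exits.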

Any built in methods where I can get the UNIX server manufacturer / model and serial number?

I am writing a Perl script that will be deployed and executed on many servers. Some of my requirements are retrieving the manufacturer, model and serial number. Unfortunately I can't seem to figure out how to do that. I'm not seeing any built in libraries to do this.
I'm not sure if I can use libraries that don't come with Perl since I wouldn't be able to include those when it gets executed on the other servers.
Any thoughts?
There's a Perl module called Parse::DMIDecode which wraps the dmidecode program that Brian pointed out.
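A short sketch of how that might look; the keyword names follow dmidecode's string keywords, and this assumes dmidecode is installed and the script runs with sufficient privileges:
use strict;
use warnings;
use Parse::DMIDecode;

my $dmi = Parse::DMIDecode->new(nowarnings => 1);
$dmi->probe;    # runs dmidecode and parses its output

printf "Manufacturer: %s\n",  $dmi->keyword('system-manufacturer');
printf "Product name: %s\n",  $dmi->keyword('system-product-name');
printf "Serial number: %s\n", $dmi->keyword('system-serial-number');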
It's not Perl but you can invoke
$ sudo dmidecode
from within your script. That will dump the BIOS info and on my machine I get:
System Information
Manufacturer: Hewlett-Packard
Product Name: HP xw6600 Workstation
Version:
Serial Number: CXC9062H43
UUID: 53F3EB48-4CF9-DD11-BBDA-29023A11001F
Wake-up Type: Power Switch
SKU Number: RV725AV
Family: 103C_53335X
I don't know how much of the above is a) standard info b) populated by our service desk when provisioning PCs for our use. But it's worth investigating further.
From the man page for dmidecode:
dmidecode is a tool for dumping a computer's DMI (some say SMBIOS) table contents in a human-readable format. This table contains a description of the system's hardware components, as well as other useful pieces of information such as serial numbers and BIOS revision. Thanks to this table, you can retrieve this information without having to probe for the actual hardware. While this is a good point in terms of report speed and safeness, this also makes the presented information possibly unreliable.
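Rather than parsing the full dump, dmidecode's -s option returns a single string keyword, which is easy to capture from Perl. A sketch, again assuming root or sudo rights:
use strict;
use warnings;

# Each -s keyword prints exactly one value, so backticks plus chomp is enough.
chomp(my $manufacturer = `dmidecode -s system-manufacturer`);
chomp(my $product      = `dmidecode -s system-product-name`);
chomp(my $serial       = `dmidecode -s system-serial-number`);
print "$manufacturer / $product / $serial\n";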

Is there a perl function similar to lsof command in linux?

I have a shell script which archives log files based on whether the process is running or not. If the log file is not in use by the process, I archive it. Until now I've been using lsof to find which log file is in use, but going forward I've decided to do this in Perl.
Is there a Perl module that can do what the lsof command in Linux does?
There is a Perl module which wraps around lsof: see Unix::Lsof.
As I see it, the big problem with not using lsof is that one would need to work in a way that is independent of the operating system. Wrapping lsof lets the Perl programmer work with a consistent interface while staying portable across the operating systems lsof supports.
Having a Perl module author reimplement lsof would, in effect, mean writing lsof as a library and linking it into Perl - which is much more work than just using the existing binary.
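A minimal sketch of Unix::Lsof, assuming the lsof binary is installed (the module shells out to it); the log-file path is hypothetical:
use strict;
use warnings;
use Unix::Lsof;

# In list context lsof() returns the parsed output plus any error text;
# the parsed output is keyed by the PIDs holding the file open.
my ($output, $error) = lsof('/var/log/myapp.log');
if (%$output) {
    print "still in use by PID(s): ", join(', ', keys %$output), "\n";
} else {
    print "not open; safe to archive\n";
}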
One could also use the fuser command, which shows the process IDs holding a file open. There is also a module which seeks to implement the same functionality. Note this warning from its perldoc:
The way that this works is highly unlikely to work on any other OS other than Linux and even then it may not work on other than 2.2.* kernels.
One might also try walking /proc/*/fd and looking at the file descriptors there to see if any point to the file in question. If you already know the process ID of the process that would hold the log file open, it is just as easy to look only at that process's fd directory. Note that this is how the fuser module works.
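A rough sketch of that /proc walk, assuming a Linux system, sufficient permissions to read other processes' fd entries, and a hypothetical log-file path:
use strict;
use warnings;
use Cwd qw(abs_path);

my $target = abs_path('/var/log/myapp.log');   # hypothetical log file
my @users;

for my $fd (glob '/proc/[0-9]*/fd/*') {
    my $link = readlink $fd or next;           # skip entries we cannot read
    if ($link eq $target) {
        my ($pid) = $fd =~ m{/proc/(\d+)/fd/};
        push @users, $pid;
    }
}

print @users ? "open by: @users\n" : "not open\n";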
That said, it should be asked "why do you want to move away from lsof"?

Query a remote server's operating system?

Is there a way to query a server for its OS type in Perl? For example, if I knew that a remote server was running Windows, I might send it a winver command from my local machine and inspect the output to determine which version of Windows it's running. But is there a way to be even more abstract and simply ask, "what are you?"
Since CPAN is huge, I was wondering if there were a module that encapsulated this sort of functionality.
If you can get command-line access on the remote server, then you should be able to use %ENV:
jmaney> perl -e 'print "$ENV{OSTYPE}\n";'
linux
Edit: It looks as though the key in Windows (or, at least on Windows 7 on my laptop) is OS. So, unfortunately, the exact solution via %ENV is OS-dependent... You could, however, check to see which of $ENV{OS} or $ENV{OSTYPE} is defined (and if they're both defined, then canonically pick which one you want to use), and proceed accordingly.
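A sketch of that defined-ness check, falling back to Perl's own $^O when neither variable is set (this assumes you can run Perl on the box in question, and the defined-or operator needs Perl 5.10+):
use strict;
use warnings;

# Prefer OSTYPE (common in *nix shells), then OS (set on Windows),
# then Perl's built-in operating system name.
my $os = $ENV{OSTYPE} // $ENV{OS} // $^O;
print "$os\n";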
There is no foolproof way to do this, but the HTTP Server header -- which the server isn't required to send -- often contains the OS. For example, it may look like this (from Wikipedia):
Server: Apache/1.3.27 (Unix) (Red-Hat/Linux)
The Perl CGI module has an http function that gets the HTTP headers. You could use it like this:
my $server = $q->http('Server');
# Test $server for Windows, *nix, etc
# My Perl experience is minimal and I haven't used it in
# a while, so I'm not going to give an example here, but
# someone can feel free to edit one in.
CPAN probably has a module to do the testing on the Server header for you.
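If "querying" means making an HTTP request to the remote machine rather than inspecting your own CGI environment, LWP::UserAgent can fetch the response and expose the Server header directly. A sketch with a hypothetical URL; remember the header may be absent or deliberately vague:
use strict;
use warnings;
use LWP::UserAgent;

my $ua  = LWP::UserAgent->new(timeout => 10);
my $res = $ua->head('http://example.com/');     # a HEAD request is enough for headers
my $server = $res->header('Server') // 'unknown';

print "Server header: $server\n";
print "Looks like Windows\n" if $server =~ /win/i;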

Need an opinion on a method for pulling data from a file with Perl

I am having a conflict of ideas with a script I am working on. The conflict is that I have to read a bunch of lines from VMware files. As of now I just use SSH to probe every file for each virtual machine while the files stay on the server. The reason I now think this is a problem is that I have 10 virtual machines and about 4 files that I probe for file paths and such, and this opens a new SSH channel every time I refer to the SSH object I created with Net::OpenSSH. When all is said and done, I have probably opened about 16-20 SSH connections. Would it just be easier in a lot of ways if I SCP'd the files over to the machine that needs to process them and then did most of the work on the local side? The script I am making is a backup script for ESXi, and it will end up storing the files I need to read from anyway.
Any opinion would be most helpful.
If the VMs do the work locally, it's probably better in the long run.
In the short term roughly the same amount of resources will be used either way, but if you were to migrate these instances to other hardware, you would then see gains from distributing the processing.
Also, from a maintenance perspective it's probably more convenient for each VM to host the local process, since if you need to tweak it for a specific box it would make more sense to keep it there.
Aside from the scalability benefits, there aren't really any other pros or cons.
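If you do go the copy-locally route, Net::OpenSSH (which the question already uses) multiplexes commands and file transfers over one master connection, so fetching a handful of files per host is cheap. A sketch with hypothetical host and path names:
use strict;
use warnings;
use Net::OpenSSH;

my $ssh = Net::OpenSSH->new('esxi-host.example.com');   # one connection, reused below
$ssh->error and die "connection failed: " . $ssh->error;

# Copy the VM configuration files down, then parse them locally.
# The local target directory must already exist.
for my $file ('/vmfs/volumes/datastore1/vm1/vm1.vmx') {  # hypothetical remote path
    $ssh->scp_get($file, '/tmp/backup/') or die "scp failed: " . $ssh->error;
}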