Checking if a file is still open - perl

I use ffmpeg in a batch job to shrink and convert video files. Meanwhile, I'd like to check from a Perl script whether the conversion of a given video is done.
Is the -t operator the right check for that?
Or does a simple executable check with -x do the trick? Or something else?
Thank you!

It's inadvisable to argue with people whose help you're getting for free.
It's quite possible to examine which file handles are open and by which processes, but the method varies according to the operating system. And it sounds like you're running ffmpeg on a remote system, so it's even less straightforward.
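On Linux, for instance, you can ask lsof whether any process still has the file open. A rough sketch, assuming lsof is installed and the check runs on the machine where ffmpeg runs:

use strict;
use warnings;

my $file = 'out.mp4';    # hypothetical output file
# lsof exits 0 if some process has the file open, non-zero otherwise
my $in_use = system("lsof \Q$file\E >/dev/null 2>&1") == 0;
print $in_use ? "still being written\n" : "no process has it open\n";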
The usual method would be cooperative locking, but ffmpeg doesn't do that.
If you're running a batch job, then the obvious way is for the job to create a flag file once the ffmpeg run is complete. Then you need only wait for that file to exist to be sure that ffmpeg has finished, as sketched below.
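For example, if the batch job runs something like ffmpeg -i in.avi out.mp4 && touch out.mp4.done, the Perl side needs only a small polling loop (the flag file name here is hypothetical):

use strict;
use warnings;

my $flag = 'out.mp4.done';    # created by the batch job after ffmpeg exits

# poll every five seconds until the flag file appears
sleep 5 until -e $flag;
print "ffmpeg has finished\n";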
Please don't be overconfident in future, or you will get only the answers that you deserve.

Related

Reduce relocatable win32 Perl to as few files and bytes as possible

I'm trying to use a Perl program on a Windows HTCondor computing cluster. The way HTCondor on Windows works is that it copies all dependencies into a temporary directory (used as a chroot of sorts) and then deletes the directory after the specified outputs have been moved to a designated place.
If I take only perl.exe and perl514.dll and make a job like this: perl -e "print qq/hello\n/" and tell the cluster to run it 200 times, then each replication winds up taking about 15 seconds, which is acceptable overhead. That's almost all time spent repeatedly copying the files over the network and then deleting them. echo_hello.bat run 200 times takes more like two seconds per replication.
The problem I have is that when I try to use my full blown perl distribution of 55MB and 2,289 files, a single "hello" rep takes something like four minutes of copying and deleting, which is unacceptable. When I try to do many runs the disks on the machines grind to a halt trying to concurrently handle all the file operations across all the reps, so it doesn't work at all. I don't know how long it might take to eventually finish because I gave up after half an hour and no jobs had finished.
I figured PAR::Packer might fix the issue, but nope. I tried print_hello.exe created like this: pp -o print_hello.exe -e "print qq/hello\n/". It still makes things grind to a halt, apparently by swamping the filesystem. I think a PAR::Packer executable makes a ton of temporary files as it pulls out files it needs from the archive. I think the windows file system totally chokes when there are a bunch of concurrent small file operations.
So how can I go about cutting down the perl I built to something like 6MB and a dozen files? I'm really only using a tiny number of core modules and don't need most of the crap in bin and lib, but I have no idea how to proceed ripping out stuff in a sane way.
Is there an automated way to strip away unneeded files and modules?
I know TCL has a bunch of facilities for packing files into a single uncompressed archive that can then be accessed through a "virtual filesystem" without expanding the file. Is there some way to do this with perl itself sort of like with PAR? The problem is PAR compresses everything and then has to extract to temporary files, rather than directly work through a virtual filesystem layer. (If I understand correctly.)
My usage of perl is actually as a scripting layer. It's embedded in a simulation, so I'm really running my_simulation.exe, which depends on perl514.dll, but you get the idea. I also cannot realistically do anything to the HTCondor cluster other than use it. So there's no need to think outside the box on what I should be using instead of perl or what I could administratively tweak in Windows and HTCondor, thanks.
You can use Module::ScanDeps to get a list of the actual dependencies of your perl. PAR::Packer unpacking the whole application took such a painful amount of time that I decided to build the executable myself.
Here is my ready-to-use script, which gathers the Perl dependencies into a directory; you can then reduce the number of modules further, e.g. by manually removing some dependencies after copying.
In theory (I have never tried this), your next step could be to merge all pure-Perl dependencies into a single file (like deps.pm), although that might be non-trivial due to Perl's autoload magic and some other tricks.
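That script isn't reproduced here, but a minimal sketch of the approach, using Module::ScanDeps to copy the dependencies of a hypothetical script.pl into a stripped-down lib tree, could look like this:

use strict;
use warnings;
use Module::ScanDeps;
use File::Basename qw(dirname);
use File::Copy qw(copy);
use File::Path qw(make_path);

# recursively scan the entry script for everything it loads
my $deps = scan_deps(files => ['script.pl'], recurse => 1);

# copy each dependency into minimal/lib/, preserving its relative path
for my $rel (sort keys %$deps) {
    my $dst = "minimal/lib/$rel";
    make_path(dirname($dst));
    copy($deps->{$rel}{file}, $dst) or warn "copy $rel failed: $!";
}

Running perl with that directory on @INC (perl -Iminimal/lib script.pl) is a quick way to verify nothing essential was left out.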
You can list the modules that are needed by your program using the very nice ListDependencies module.
To my knowledge it isn't downloadable anywhere, but it is simple to copy and paste into your own ListDependencies.pm file.
You should read the POD documentation within the module for usage instructions.

SVG to PDF (with Perl Cairo?)

In a Perl script, I try to convert SVG files to PDF. This works great by just shelling out to Inkscape:
system "inkscape -D -z --file=$in --export-pdf=$out";
But it is enormously slow even for small 100 KB files (it can take minutes per file), causing the script to fail when run with a time-out constraint, e.g. on a webserver.
To speed things up, I have read about svg2pdf as a standalone tool, but I never found a binary for Win7 or managed to compile it, even with the libcairo DLLs present.
My last idea is the CPAN module Cairo. I was hoping it could convert an SVG file to PDF, but in the documentation I only find drawings and surfaces, and no method to write/convert.
Does anyone have experience with that?
Making my comment an answer: you could try rsvg-convert, which is part of the librsvg library. It's probably faster than Inkscape, but it's still an external command.
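A sketch of the call, assuming rsvg-convert is on the PATH and $in/$out are the same variables as in the question:

# same shape as the Inkscape call, but without loading a full editor
system('rsvg-convert', '-f', 'pdf', '-o', $out, $in) == 0
    or die "rsvg-convert failed: $?";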

Low CPU Usage with dbPoweramp Powershell

I am using a program called dbPoweramp to convert music from within PowerShell, following the documentation here, which was all I could find when searching. Whenever I convert from within the program itself I get 100% CPU usage and it fully utilizes all eight threads. However, whenever I launch the conversion from the command line I only get around 13% CPU usage. Having to launch the program manually obviously isn't desirable, because I am going for automation here. I have tried messing with the -processors argument, but it has made no difference. Does anyone have any idea why that would be?
I have also tried using FFmpeg instead, but its CPU usage is similarly low. If anyone could post code that would make FFmpeg utilize all eight cores, that would work just as well.
Here is the section of code that does the actual conversion. Essentially it searches for all FLAC, M4A, or MP3 files and then automatically converts them to variable-bitrate quality 1 MP3s for streaming.
$oldMusic = Get-ChildItem -Include @("*.flac", "*.m4a", "*.mp3") -Path $inProcessPath -Recurse # gets all of the music
cd 'C:\Program Files (x86)\Illustrate\dBpoweramp'
foreach ($oldSong in $oldMusic) {
    $newSong = [io.path]::ChangeExtension($oldSong.Name, '.mp3') # bare file name with an .mp3 extension
    $oldSongPath = $oldSong.FullName
    $newSongPath = "E:\Temp\$newSong"
    .\CoreConverter.exe -infile="$oldSongPath" -outfile="$newSongPath" -convert_to="mp3 (Lame)" -V $quality # converts the file
}
Thanks in advance!
I don't think the encoder runs on more than a single thread; I suspect the program itself gets full CPU usage by encoding up to eight tracks at a time, one on each core. In your example the encoding happens serially, meaning that you're only ever using one core at a time. The same will occur with FFmpeg.
I'm no PowerShell guy, but if you can get it to run up to eight processes at once, you won't have this problem (see the sketch below).
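Staying with the Perl flavour of this digest, here is a rough sketch of that fan-out idea using fork; the encoder arguments follow the question, while the FLAC glob and quality value are assumptions (a PowerShell port would use Start-Process or background jobs instead):

use strict;
use warnings;

my @songs   = glob '*.flac';    # assumed input files
my $max     = 8;                # one encoder per core
my $running = 0;

for my $song (@songs) {
    if ($running >= $max) {     # throttle: wait for one encoder to exit
        wait();
        $running--;
    }
    my $pid = fork() // die "fork failed: $!";
    if ($pid == 0) {            # child: replace ourselves with one encoder run
        (my $out = $song) =~ s/\.flac\z/.mp3/;
        exec 'CoreConverter.exe', "-infile=$song", "-outfile=$out",
             '-convert_to=mp3 (Lame)', '-V', '1';
        die "exec failed: $!";
    }
    $running++;
}
1 while wait() != -1;           # reap the remaining encoders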

Is there a simple way to effectively cat a filestream without writing to disk?

I'm working on a system to scan remote files for viruses. I'm downloading as a stream and would like to avoid saving unscanned files to disk for obvious reasons.
I can use clamscan to scan a stream, but I'm not sure how to generate that stream on the command line. Both echo and the shell have the potential to play games with what is actually being passed if I do something like the following:
system("echo $data | clamscan -");
Are there any elegant solutions to achieving this that I am missing? Obviously I could filter the file dump through some stream editor before it hits clamscan, but that is definitely not elegant and, I would think, prone to error.
You could use popen(), which in Perl means a piped open. However, it has its limitations; anything more sophisticated will require you to play with your own pipes and process spawning.
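The list form of a piped open also avoids the shell entirely, so nothing can mangle the data on the way through. A minimal sketch, with $data standing in for the downloaded bytes as in the question:

use strict;
use warnings;

my $data = '...';    # the downloaded file contents, as in the question

# stream the in-memory data straight into clamscan's stdin;
# the list form of open bypasses the shell completely
open(my $scan, '|-', 'clamscan', '--no-summary', '-')
    or die "can't run clamscan: $!";
print {$scan} $data;
close $scan;         # $? now holds clamscan's exit status

# clamscan exits 0 when clean and 1 when a virus is found
my $status = $? >> 8;
print $status == 0 ? "clean\n" : "infected or error ($status)\n";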

Parsing syslogs with Perl using a named pipe?

I'm trying to write a script that will grab logs across a network and parse them for relevant information, then perform some action (email if there's a critical issue, simply write to a log file if it's a warning). I am using an AIX machine with syslogd to process the logs. Right now it is performing as usual, writing all logs to files ... a lot of files.
I was advised to use Perl and named pipes to implement the script. I've just spent some time reading up on named pipes and I find them quite fascinating. However, I'm stumped as to how the "flow" of information should work in this situation and how to make Perl handle it.
For example, should I create a FIFO outside of the script, tell syslogd to write to it by default, and have my script parse it on the other end? Can Perl do that, and (for you sysadmins) is this a smart/possible option?
This is my first encounter with Perl and with named pipes.
You can certainly create a named pipe in Perl, although it seems to me that for what you are trying to do it is better to create the named pipe outside of Perl, as you suggest, then have syslogd write to it and read from the pipe in Perl.
I don't know AIX very well, but this should do for creating the pipe (source):
mkfifo /var/adm/syslog.pipe
To have syslogd write to it, add this line to /etc/syslog.conf:
*.info |/var/adm/syslog.pipe
Then tell syslogd to reread its configuration:
kill -HUP `cat /var/run/syslogd.pid`
You could also put all of this into your Perl script: if the pipe did not exist or syslogd were not using it, the script would arrange everything required for you.
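A minimal sketch of the reading side (the severity match and alert action are placeholders to adapt):

use strict;
use warnings;
use POSIX qw(mkfifo);

my $pipe = '/var/adm/syslog.pipe';

# create the FIFO if it doesn't exist yet
unless (-p $pipe) {
    mkfifo($pipe, 0600) or die "mkfifo $pipe failed: $!";
}

while (1) {
    # opening a FIFO for reading blocks until syslogd opens its end for writing
    open(my $fh, '<', $pipe) or die "can't open $pipe: $!";
    while (my $line = <$fh>) {
        if ($line =~ /\b(crit|alert|emerg)\b/i) {
            warn "CRITICAL: $line";    # placeholder: send an email here
        }
        # placeholder: write warnings to a log file instead
    }
    # EOF means the writer closed the pipe; loop around and reopen
}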
Possibly you could provide some more details as to what you are trying to do, if you need more help.