Low CPU Usage with dbPoweramp in PowerShell

I am using a program called dbPoweramp to convert music from within PowerShell. I am using the documentation here, which was all I could find for it when searching. Whenever I use the program itself to convert, I get 100% CPU usage and it fully utilizes all eight threads. However, whenever I launch it through the command line, I only get around 13% CPU usage. It obviously isn't desirable to have to launch the program manually, because I am going for automation here. I have tried messing with the -processors argument, but it has made no difference. Does anyone have any idea why that would be?
I have also tried using FFMPEG instead, but the CPU usage for FFMPEG is similarly low. If anyone could post code that would make FFMPEG utilize all eight cores that would work just as well.
Here is the section of code that does the actual conversion. Essentially, it searches for all .flac, .m4a, or .mp3 files and then automatically converts them to variable-bitrate quality 1 MP3s for streaming.
$oldMusic = Get-ChildItem -Include @("*.flac", "*.m4a", "*.mp3") -Path $inProcessPath -Recurse # gets all of the music
cd 'C:\Program Files (x86)\Illustrate\dBpoweramp'
foreach ($oldSong in $oldMusic) {
    $oldSongPath = $oldSong.FullName
    # build the destination path from the file name only, not from the full source path
    $newSongName = [io.path]::ChangeExtension($oldSong.Name, '.mp3')
    $newSongPath = "E:\Temp\$newSongName"
    .\CoreConverter.exe -infile= $oldSongPath -outfile= $newSongPath -convert_to= "mp3 (Lame)" -V $quality # converts the file
}
Thanks in advance!

I don't think the encoder runs on more than a single thread. I think that it encodes up to 8 tracks at a time, one on each core. In your example, the encoding will happen serially, meaning that you're only going to use one core at a time. The same will occur with FFmpeg.
I'm no PowerShell guy, but if you can get it to run up to 8 processes at once, you won't have this problem.
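A minimal PowerShell sketch of that idea, reusing $oldMusic, $quality, and the CoreConverter arguments from the question: each file is encoded in its own background job, and the loop caps how many jobs run at once. The cap of 8 and the E:\Temp destination are assumptions carried over from the question, so adjust them to taste.

$maxJobs   = 8                                      # roughly one encode per core
$converter = 'C:\Program Files (x86)\Illustrate\dBpoweramp\CoreConverter.exe'
foreach ($oldSong in $oldMusic) {
    while (@(Get-Job -State Running).Count -ge $maxJobs) {
        Start-Sleep -Milliseconds 500               # throttle: wait for a free job slot
    }
    $newSongPath = "E:\Temp\" + [io.path]::ChangeExtension($oldSong.Name, '.mp3')
    Start-Job -ArgumentList $converter, $oldSong.FullName, $newSongPath, $quality -ScriptBlock {
        param($exe, $in, $out, $q)
        & $exe -infile= $in -outfile= $out -convert_to= "mp3 (Lame)" -V $q
    } | Out-Null
}
Get-Job | Wait-Job | Remove-Job                     # wait for the stragglers, then clean up

On PowerShell 7+ the same throttling can be written more compactly with ForEach-Object -Parallel and -ThrottleLimit 8.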

Related

Checking if a file is still open

I use ffmpeg to reduce the size of and convert a video file with a batch job. Meanwhile, I'd like to check whether the conversion of this video is done, using a Perl script.
Is the -t operator the check for that?
Or does a simple executable check, -x, do the trick? Or something else?
Thank you!
It's inadvisable to argue with people whose help you're getting for free
It's quite possible to examine what file handles are open and by what processes, but the method varies according to the operating system. And it sounds like you're running ffmpeg on a remote system, so it's even less straightforward.
The usual method would be cooperative locking, but ffmpeg doesn't do that.
If you're running a batch job, then the obvious way is for the job to create a flag file once the ffmpeg run is complete. Then you need only wait for the existence of that file to be sure that ffmpeg has finished.
Please don't be overconfident in future, or you will get only the answers that you deserve.
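To make the flag-file handshake concrete, here is a minimal sketch, written in PowerShell to match the rest of this thread (in a Perl watcher the test would simply be -e on the flag file inside a sleep loop). The file names and ffmpeg arguments are made up for illustration:

# conversion job: run ffmpeg, then drop a flag file only after it has exited
ffmpeg -i input.mp4 -crf 28 output.mp4              # example arguments only
New-Item -ItemType File -Path 'output.mp4.done' | Out-Null

# watcher: wait for the flag file before touching output.mp4
while (-not (Test-Path 'output.mp4.done')) {
    Start-Sleep -Seconds 5
}
# ffmpeg has finished; output.mp4 is safe to pick up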

PowerShell monitoring external processes as they run

I have a script that runs an external process that creates some output (a file), and I also capture the console output to a file (the log):
try
{
    p4d -r $root -jc | Out-File $output   # run the external process, capture its console output to the log
}
catch
{
    Write-Error $_                        # a try block needs a catch (or finally); handle the failure here
}
I later check the log output, grab some info and the script carries on.
The problem is that the external process could stall (and once has), and I need a way to detect that on the fly so I can handle the error.
The best way I can think of to do this is to monitor the file that the process creates and check that its size keeps increasing. Obviously this isn't without issues, as the process could stall at any point and we don't know the final file size.
I will likely check the size from the last successful run and use that to set some limits.
My question is: how do I achieve the whole 'check a process whilst it's running' thing?
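For what it's worth, here is one way that file-size watch could look, sketched under the assumption that the p4d call from the snippet above can be moved into a background job and that a five-minute stall window is acceptable; $root and $output are the same variables as in the question.

$stallLimit = [TimeSpan]::FromMinutes(5)            # assumed: how long the log may stop growing
$job = Start-Job -ScriptBlock {
    param($root, $output)
    p4d -r $root -jc | Out-File $output
} -ArgumentList $root, $output

$lastSize   = -1
$lastGrowth = Get-Date
while ($job.State -eq 'Running') {
    Start-Sleep -Seconds 10
    $size = (Get-Item $output -ErrorAction SilentlyContinue).Length
    if ($size -gt $lastSize) {
        $lastSize   = $size                         # the file is still growing
        $lastGrowth = Get-Date
    }
    elseif (((Get-Date) - $lastGrowth) -gt $stallLimit) {
        Stop-Job $job                               # looks stalled: stop it and handle the error
        throw "p4d appears to have stalled; $output has not grown since $lastGrowth"
    }
}
Receive-Job $job | Out-Null                         # drain any remaining console output
Remove-Job $job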

Running MATLAB system command in background with stdout

I'm using MATLAB and calling an .exe via the system command.
[status,cmdout] = system(command_s);
where command_s is a command string that is formatted earlier in my script to pass all the desired options to the .exe. The .exe would normally write to a .csv file via the > redirection operator in Windows/DOS. Instead, this output is going to cmdout where I use it later in the MATLAB script. It is working correctly and as expected. I'm doing it this way so that the process just uses memory and does not write a very large file to the disk, which would then have to be read from the disk and then deleted after I'm done with it. In the end, it saves a .mat file that's usually in hundreds of KB instead of 10s/100s of MBs as the .csv file would be (some unneeded data is thrown out in the end).
The issue I'm having is that, since I'm dealing with large files, the executable can take a significant amount of time. I typically have to wait about 2 minutes after executing this command. In the meantime, I have no feedback to know that it is progressing and that my system hasn't frozen. I know I could add the & symbol to the end of my string, command_s, and run MATLAB code while this runs in the background (or asynchronously, as some would say), but that brings up an external window AND makes cmdout empty, so I cannot use the output, forcing me to sit there for 2 minutes wondering each time it executes.
Is there any way to run in the background AND get the stdout from the command?
Maybe you could try system(command_s,'-echo')?

Reduce relocatable win32 Perl to as few files and bytes as possible

I'm trying to use a perl program on a Windows HTCondor computing cluster. The way HTCondor on Windows works is that it copies all dependencies into a temporary directory (used as a chroot of sorts) and then deletes the directory after the specified outputs are moved to a designated place.
If I take only perl.exe and perl514.dll and make a job like this: perl -e "print qq/hello\n/" and tell the cluster to run it 200 times, then each replication winds up taking about 15 seconds, which is acceptable overhead. That's almost all time spent repeatedly copying the files over the network and then deleting them. echo_hello.bat run 200 times takes more like two seconds per replication.
The problem I have is that when I try to use my full blown perl distribution of 55MB and 2,289 files, a single "hello" rep takes something like four minutes of copying and deleting, which is unacceptable. When I try to do many runs the disks on the machines grind to a halt trying to concurrently handle all the file operations across all the reps, so it doesn't work at all. I don't know how long it might take to eventually finish because I gave up after half an hour and no jobs had finished.
I figured PAR::Packer might fix the issue, but nope. I tried print_hello.exe created like this: pp -o print_hello.exe -e "print qq/hello\n/". It still makes things grind to a halt, apparently by swamping the filesystem. I think a PAR::Packer executable makes a ton of temporary files as it pulls out the files it needs from the archive. I think the Windows file system totally chokes when there are a bunch of concurrent small file operations.
So how can I go about cutting down the perl I built to something like 6MB and a dozen files? I'm really only using a tiny number of core modules and don't need most of the crap in bin and lib, but I have no idea how to proceed ripping out stuff in a sane way.
Is there an automated way to strip away un-needed files and modules?
I know TCL has a bunch of facilities for packing files into a single uncompressed archive that can then be accessed through a "virtual filesystem" without expanding the file. Is there some way to do this with perl itself sort of like with PAR? The problem is PAR compresses everything and then has to extract to temporary files, rather than directly work through a virtual filesystem layer. (If I understand correctly.)
My usage of perl is actually as a scripting layer. It's embedded in a simulation. So I'm really running my_simulation.exe, which depends on perl514.dll, but you get the idea. I also cannot realistically do anything to the HTCondor cluster other than use it. So there's no need to think outside the box on what I should be using instead of perl or what I could administratively tweak in Windows and HTCondor, thanks.
You can use Module::ScanDeps to get the list of your perl program's actual dependencies. It was terrible that PAR::Packer took a significant amount of time unpacking the whole application, so I decided to build the executable myself.
Here is my ready-to-use script, which gathers perl dependencies into a directory; it might help you reduce the number of perl modules, e.g. by manually removing some dependencies after copying.
In theory (I have never tried this), your next step could be to merge all pure-perl dependencies into a single file (like deps.pm), although that might be non-trivial because of perl's autoload magic and some other tricks.
You can list the modules that are needed by your program using the very nice ListDependencies module.
To my knowledge it isn't downloadable anywhere, but it is simple to copy and paste into your own ListDependencies.pm file.
You should read the POD documentation within the module for usage instructions.

Is there a simple way to effectively cat a filestream without writing to disk?

I'm working on a system to scan remote files for viruses. I'm downloading as a stream and would like to avoid saving unscanned files to disk for obvious reasons.
I can use clamscan to scan the stream, but I'm not sure how to generate that stream on the command line. Both echo and the command line have the potential to play games with what is actually being output if I do something like the following:
system("echo $data | clamscan -");
Are there any elegant solutions to achieving this that I am missing? Obviously I could probably filter the file dump with some stream editor before it hits clamscan, but that is definitely not elegant and prone to error, I would think.
You could use popen(). However, it has its limitations. Anything more sophisticated will require you to play with your pipes and spawning of processes.
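The pipe-based approach that answer hints at, sketched here in PowerShell to match the rest of this thread (if the caller is PHP, proc_open() gives you the same shape: a writable stdin pipe, whereas popen() is one-directional). It assumes clamscan is on the PATH and that $data already holds the downloaded content as a string:

$psi = New-Object System.Diagnostics.ProcessStartInfo
$psi.FileName               = 'clamscan'
$psi.Arguments              = '-'                   # '-' tells clamscan to scan stdin
$psi.RedirectStandardInput  = $true
$psi.RedirectStandardOutput = $true
$psi.UseShellExecute        = $false

$proc = [System.Diagnostics.Process]::Start($psi)
$proc.StandardInput.Write($data)                    # stream the data; nothing ever hits a command line or disk
$proc.StandardInput.Close()                         # EOF so clamscan knows the stream is complete
$report = $proc.StandardOutput.ReadToEnd()
$proc.WaitForExit()

# clamscan's exit code: 0 = clean, 1 = virus found, 2 = error
$infected = ($proc.ExitCode -eq 1)

For binary content, write the raw bytes to $proc.StandardInput.BaseStream instead of calling Write with a string.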