Batch encode MP4 with HandBrake CLI, same frame size

I have about 2000 files that differ slightly in frame size, and I need to re-encode them all to 640x480. Since the HandBrake GUI doesn't work that way, maybe the CLI is the way to go? I'm not sure how to work the commands, though. I would need batch conversion and, if possible, output to the source folder. If erasing the old files at the same time were possible too, that would be awesome. How would I go about doing this?
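A minimal sketch of how the CLI route could look in Perl, assuming HandBrakeCLI is on your PATH; the directory path and the output-naming suffix are placeholders, and the original is only deleted after an apparently successful encode:

    #!/usr/bin/perl
    # Sketch: re-encode every .mp4 under a directory to 640x480 with
    # HandBrakeCLI, write the result next to the source, and delete the
    # original only if the encode succeeded and produced a non-empty file.
    use strict;
    use warnings;
    use File::Find;

    find(sub {
        return unless /\.mp4$/i;
        return if /-640x480\.mp4$/i;   # skip outputs we generated ourselves
        my $src = $File::Find::name;
        (my $dst = $src) =~ s/\.mp4$/-640x480.mp4/i;
        my $rc = system('HandBrakeCLI', '-i', $src, '-o', $dst,
                        '--width', '640', '--height', '480');
        if ($rc == 0 && -s $dst) {
            unlink $src or warn "could not delete $src: $!\n";
        } else {
            warn "encode failed for $src; keeping the original\n";
        }
    }, '/path/to/videos');

Test it on a handful of files first; with 2000 files, a bad flag or one broken source file is almost guaranteed to turn up somewhere, and you want the originals intact when it does.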

Related

Opening a very large file in emacs

I am trying to open a very large file in Emacs and it fails to load. Is there a way to open only a portion of this very large file? I don't need to open the whole thing.
You may want to try vlf.el which basically runs head for you. It's still pretty crude, sadly.
You can use the head command, store its output in a file, and read that file:
http://unixhelp.ed.ac.uk/CGI/man-cgi?head
Windows equivalent is here:
Windows equivalent of the 'tail' command
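If you'd rather script it than remember the flags, the same first-N-lines extraction is a few lines of Perl; the file names and the 1000-line cutoff here are just examples:

    # Sketch: copy the first 1000 lines of a huge file into a smaller one
    use strict;
    use warnings;

    open my $in,  '<', 'huge.log'      or die "huge.log: $!";
    open my $out, '>', 'huge-head.txt' or die "huge-head.txt: $!";
    while (<$in>) {
        print {$out} $_;
        last if $. >= 1000;   # $. is the current input line number
    }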
If all you want is to read parts of a monster file, then use this tool: http://www.readfileonline.com/. It rapidly cracked open anything I fed it.

Does anyone know of a good app/tool that allows you to edit MP4 files?

I need to be able to increase the volume, cut out silence, add silence etc. for mp4 files. Any help would be appreciated!
Oh, I thought you were asking about tools to edit the MP4 bitstream, which I am interested in. I use VLC player for this, because it will give me a log file with errors when the MP4s I create don't work, but I am interested in better tools.
For editing MP4s, youtube.com/editor is free and cross-platform compatible.

How can I rotate and compress Log4perl log files?

From what I can tell, neither Log4perl nor any of its related modules on CPAN supports rotation and compression of log files.
Rotation can be accomplished by using:
Log::Log4perl::Appender::File
Log::Dispatch::FileRotate.
But neither module supports both rotation and compression. (Log::Dispatch::FileRotate has it on its todo list, but it's not currently implemented.)
It is possible to do this using the standard Logrotate facility in Linux, by using either Log::Log4perl::Appender::File's recreate_check_interval or recreate_check_signal.
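For reference, a configuration along those lines might look like the following; the file path, logger level, and signal name are assumptions, but recreate and recreate_check_signal are documented options of Log::Log4perl::Appender::File:

    log4perl.rootLogger                          = INFO, Logfile
    log4perl.appender.Logfile                    = Log::Log4perl::Appender::File
    log4perl.appender.Logfile.filename           = /var/log/myapp.log
    log4perl.appender.Logfile.recreate           = 1
    log4perl.appender.Logfile.recreate_check_signal = USR1
    log4perl.appender.Logfile.layout             = Log::Log4perl::Layout::PatternLayout
    log4perl.appender.Logfile.layout.ConversionPattern = %d %p %m%n

The matching logrotate stanza would then use delaycompress and send the signal (e.g. kill -USR1 against the daemon's PID) from a postrotate script.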
From initial tests, it looks like using logrotate with the delaycompress option will do the trick, even on a machine with high load: once the file is moved, Log4perl continues logging to the same filehandle until the signal is caught.
However, if delaycompress is not used and there is even a slight delay between the compression of the log file and the catching of the signal by the Perl program, some logging data might be lost.
What do you think? Are there other options we did not think of?
Over the years, I've found that you almost always want to use external methods for log file rotation with Log4perl. You simply avoid a lot of subtle issues (log delays, permission issues) that internal log rotation inevitably runs into.
You've mentioned two methods that work with logrotate on Linux; why not stick with them? The Log4perl FAQ describes using newsyslog, which is the FreeBSD equivalent of logrotate and provides similar features.
Have you thought about working with Log::Dispatch::FileRotate's maintainers to add the features it's missing and you need? It is open source, after all. :)
If you don't want to deal with that yourself, there are various CPAN support consultancies that do that for you.
I contacted the author of Log::Dispatch::FileRotate, as suggested here, and he explained the reason why compression is not yet implemented in Log::Dispatch::FileRotate.
Basically, compressing right after rotation might block the running process during the compression, which is pretty expensive.
One suggested option was to allow the user of Log::Dispatch::FileRotate to execute an arbitrary application on the file just after rotation, so the compression happens in another, non-blocking process.
Another suggestion was to have a filesystem trigger (like inotify) kick off the compression when the file is closed for writing by the main process.
Yet another suggestion is to write the log file compressed through a gzip pipe, or with one of the Perl gzip modules. This works, but causes some problems: grep and less won't work on the file. zgrep and zless will, but zgrep gives an ugly warning when grepping a gzip file which is still open for writing. Using tail on the file will also not work, so this option isn't practical.
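For completeness, the gzip-pipe variant is only a few lines of Perl (the file name is an example), and it also shows why the tail/grep limitation bites: the bytes on disk are compressed from the very first write.

    # Sketch: write log lines through a gzip pipe
    use strict;
    use warnings;
    use IO::Handle;

    open my $log, '|-', 'gzip -c >> app.log.gz' or die "gzip pipe: $!";
    $log->autoflush(1);   # hand lines to gzip immediately; note gzip itself
                          # still buffers, so data may land on disk late
    print {$log} "something happened\n";
    close $log or warn "gzip exited abnormally: $!";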

Why are Apple DMG files getting corrupted over FTP?

I am trying to FTP some Apple DMG files. If we do it by hand through Safari or IE, the file ends up at the destination just fine and uncorrupted. However, if I use a freeware FTP client that we had been using with great success for zips and exes, or if I use a PowerShell script I finished off (adapted from another Stack Overflow question's answer), then I lose about 0.5 MB on a 10.5 MB file and the DMG is corrupted. Does anyone have any clues what could be going wrong? Things I could do to prevent it? So far all I have tried is gzipping the DMG before sending, and that accomplished nothing. Again, anything but a DMG gets transmitted just fine.
FYI, I am using binary mode transfers, so that is not it. Thanks, though.
It seems like your client treats the DMG file as a text file.
Set binary transfer mode in your FTP client and it will transfer the file as-is.
I always thought that ASCII transfer mode in FTP is just plain stupid. It causes more trouble than it is worth.
Are you sure everything except a DMG gets transferred correctly? It sounds like a problem with the transfer encoding. FTP supports both binary and ASCII transfer types, mainly due to historical baggage. In ye olde days, when bandwidth was scarcer, leaving off the high bit (which ASCII doesn't use) was a good time saver. However, if you have any bytes with that bit set, ASCII transfer mode will lose them - hence "binary" mode, which truncates nothing.
Typically, the command to switch transfer modes is "bin" or "ascii".
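If you end up scripting the upload, forcing binary mode explicitly is one line with Perl's Net::FTP; the host name, credentials, and file name below are placeholders:

    # Sketch: upload a DMG in binary mode with Net::FTP
    use strict;
    use warnings;
    use Net::FTP;

    my $ftp = Net::FTP->new('ftp.example.com') or die "connect: $@";
    $ftp->login('user', 'secret')   or die 'login: ' . $ftp->message;
    $ftp->binary;                   # TYPE I: no byte mangling in transit
    $ftp->put('Installer.dmg')      or die 'put: '   . $ftp->message;
    $ftp->quit;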
Just so everyone knows: it must be that the client I was using had the exact same issue as my PowerShell script. I was using a StreamReader to get the bytes for transfer, and it was assuming an encoding which was not correct. I switched to a BinaryReader, which does not assume an encoding, and it now works.

Why does making simple edits then uploading crash my site?

Whenever I alter (or even just resave without altering) a Perl file, it completely takes down our backend. I have no idea what the problem could be. Permissions are correct. Encoding is UTF-8. Transfer mode was ASCII.
I may not deal with Perl much, but neither I nor the network admin hosting our website has any idea what the problem could be.
Text editors I tried: Dreamweaver, TextMate, Vim
Operating systems I tried: Mac OS X, Linux (Ubuntu)
FTP clients I tried: Transmit (Mac), Filezilla (Linux (Ubuntu))
It's not that it's bad code; I even tried opening a file and simply saving it, and my backend still goes down.
The network admin told me that he ran the files through a dos2unix converter and it worked immediately. I of course tried this and it did not work for me. Moreover, it wouldn't make any sense, since I tried this in some of the most respected editors, and I don't think they would make such drastic changes to the file without any user input (when I say respected editors, Dreamweaver is not included in that sentiment).
I personally think it is some sort of server-side issue, because I have crossed my t's and dotted my i's in regard to any possible client-side issue, but I have tried everything. Any opinions as to what the root of this problem is, and any possible solutions? Thanks in advance.
Try setting binary mode in your FTP client. That will allow you to experiment with different line endings (dos2unix) on the client side, without worrying about them being translated during transfer.
I've had this problem in the past and line-feeds were indeed the culprit.
Your editor and/or FTP program may be mangling the linefeeds.
Running dos2unix on the server is a good test of whether line endings are the problem, but it doesn't identify the cause.
Generate an MD5 hash of the file after each step in saving and transport to find where it changes.
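On the client side, that checksum is a short Perl script (the file name is a placeholder); run it after saving and again after transfer, and compare against md5sum output on the server to see which step changes the bytes:

    # Sketch: print an MD5 hash of a file, reading it as raw bytes
    use strict;
    use warnings;
    use Digest::MD5;

    my $file = 'backend-script.pl';
    open my $fh, '<:raw', $file or die "$file: $!";
    print Digest::MD5->new->addfile($fh)->hexdigest, "  $file\n";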
You do not say what kind of framework/server you are using.
Maybe the server reloads the file while it is still being written by FTP or whatever? (I.e., the file is not complete when the server reads it?)
Will a server restart fix the problem once the file is uploaded?
It sounds like you are using dos2unix before the transfer but the network admin is using it after. Perhaps it's doing something different in that case.
How many lines are in the file? What is the file size before and after you save it, after you transfer it, and after transfer and running dos2unix on it?
If this is just a line ending problem, you might point your network admin at http://www.perlmonks.org/?node_id=586942.
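Since Perl is already on both machines, a dos2unix equivalent is a one-liner; try it on a copy first, as it edits the file in place:

    # Sketch: strip carriage returns in place (what dos2unix does)
    perl -pi -e 's/\r$//' backend-script.pl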
Response to rebra: No frameworks are used, and I don't know what kind of server this is on. This is basically a one-man project on a shared host that was pretty horribly maintained, and I'm trying to clean house.
Yeah, that does make sense, and I asked the server people about that; it was one of my first questions, actually. But even if that is the case, I can't reboot via Plesk (which is kind of like cPanel). Thanks for that, though: you put into technical words/explanation what I was thinking the whole time.