PowerShell and console app output

I'm trying to automate video conversion with PowerShell and the ffmpeg tool.
ffmpeg prints detailed information about a video when it is called without all the necessary parameters: it usually reports an error and displays the input file info if an input was specified.
For example, I interactively executed this command:
d:\video.Enc\ffmpeg.exe -i d:\video.Enc\1.wmv
This is the PowerShell console output:
ffmpeg.exe : FFmpeg version SVN-r20428, Copyright (c) 2000-2009 Fabrice Bellard, et al.
row:1 char:24
+ d:\video.Enc\ffmpeg.exe <<<< -i d:\video.Enc\1.wmv
+ CategoryInfo : NotSpecified: (FFmpeg version ...Bel
lard, et al.:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError
built on Nov 1 2009 04:03:50 with gcc 4.2.4
configuration: --enable-memalign-hack --prefix=/mingw --cross-pre
fix=i686-mingw32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32
--arch=i686 --cpu=i686 --enable-avisynth --enable-gpl --enable-vers
ion3 --enable-zlib --enable-bzlib --enable-libgsm --enable-libfaad
--enable-pthreads --enable-libvorbis --enable-libtheora --enable-li
bspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid --
enable-libschroedinger --enable-libx264 --enable-libopencore_amrwb
--enable-libopencore_amrnb
libavutil 50. 3. 0 / 50. 3. 0
libavcodec 52.37. 1 / 52.37. 1
libavformat 52.39. 2 / 52.39. 2
libavdevice 52. 2. 0 / 52. 2. 0
libswscale 0. 7. 1 / 0. 7. 1
[wmv3 @ 0x144dc00]Extra data: 8 bits left, value: 0
Seems stream 1 codec frame rate differs from container frame rate:
1000.00 (1000/1) -> 15.00 (15/1)
Input #0, asf, from 'd:\video.Enc\1.wmv':
Duration: 00:12:02.00, start: 5.000000, bitrate: 197 kb/s
Stream #0.0(eng): Audio: wmav2, 44100 Hz, 1 channels, s16, 48 kb/s
Stream #0.1(eng): Video: wmv3, yuv420p, 1024x768, 137 kb/s, 15 tbr, 1k tbn, 1k tbc
Metadata:
title : Silverlight 2.0 Hello World Application
author : Sergey Pugachev
copyright :
comment :
WMFSDKVersion : 11.0.6001.7000
WMFSDKNeeded : 0.0.0.0000
IsVBR : 1
ASFLeakyBucketPairs:
VBR Peak : 715351
Buffer Average : 127036
At least one output file must be specified
But I can't figure out how to script this and capture the output into any kind of PowerShell objects.
I tried a direct script, where the .ps1 file contained the exact expression "d:\video.Enc\ffmpeg.exe -i d:\video.Enc\1.wmv" - it didn't work. I also tried Invoke-Command and Invoke-Expression: the first one just returns the command string, and the second one dumps the error to the console but not to the -ErrorVariable I specified (I set all the output variables, not only the error one - all of them were empty).
Can anyone point me to the correct syntax for invoking console applications in PowerShell and capturing their output?
My second question is about parsing that output - I need the video resolution to calculate the correct aspect ratio for the conversion. So it would be great if someone could show how to work with the captured error output and parse a string like:
Stream #0.1(eng): Video: wmv3, yuv420p, 1024x768,

Try redirecting the error stream to stdout; then you should be able to capture both stdout and stderr in a single variable, e.g.:
$res = d:\video.Enc\ffmpeg.exe -i d:\video.Enc\1.wmv 2>&1
To capture the data try this:
$res | Select-String '(?ims)^Stream.*?(\d{3,4}x\d{3,4})' -all |
%{$_.matches} | %{$_.Groups[1].Value}
I'm not sure if $res will be one string or an array of strings, but the above should work in both cases.
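If you then want the width and height as numbers (for the aspect-ratio calculation), a minimal sketch along the same lines might look like this - the regex and variable names are just illustrative, not part of any standard ffmpeg tooling:

$res = d:\video.Enc\ffmpeg.exe -i d:\video.Enc\1.wmv 2>&1
# find the first line describing a video stream and pull out WIDTHxHEIGHT
$videoLine = $res | Where-Object { $_ -match 'Video:.*?(\d{3,4})x(\d{3,4})' } | Select-Object -First 1
if ($videoLine -match 'Video:.*?(\d{3,4})x(\d{3,4})') {
    $width  = [int]$Matches[1]
    $height = [int]$Matches[2]
    $aspect = $width / $height    # e.g. 1024 / 768 = 1.3333...
    "Resolution: ${width}x${height}, aspect ratio: $aspect"
}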

Related

Powershell script to get the metadata field "writing application"

I am using a modified version of the GetMetaData script originally written by Ed Wilson at Microsoft (https://devblogs.microsoft.com/scripting/hey-scripting-guy-how-can-i-find-files-metadata/) and then modified by user wOxxOm here https://stackoverflow.com/a/42933461/5061596 . I'm trying to analyze all my DVD and BluRay rips and see what tool was used to create them. Mainly I want to check which ones I compressed with Handbrake and which ones came directly from MakeMKV. The problem is I can't find this field.
If I use the "stock" script and change the number of properties it looks for from 0-266 up to 0-330, I find extra file info like movie length, resolution, etc., but I can't find the tool used. MediaInfo Lite, for example, does report the writing application for these files.
Looking through the metadata from the script, however, I get something like this, with no "Writing application" property:
Name : Ad Astra (2019).mkv
Size : 44.1 GB
Title : Ad Astra
Length : 02:03:02
Frame height : 2160
Frame rate : 23.98 frames/second
Frame width : 3840
Total bitrate : 51415kbps
Audio tracks : TrueHD S24 7.1 [Eng]
Contains chapters : Yes
Subtitle tracks : PGS [Eng], PGS [Eng]
Video tracks : HEVC (H265 Main 10 #L5.1)
How do I go about finding that property or is it not something that I can pull through PowerShell?
Edit: The info I'm looking for IS visible in Windows Explorer, in the file's Properties > Details tab, so if Explorer can see it I would think I should be able to as well.
Edit: actually, this approach seems more reliable - so far it works with any file that MediaInfo can read:
$FILE = "C:\test.mkv"
$content = (Get-Content -Path $FILE -First 100) + (Get-Content -Path $FILE -Tail 100)
if (($content -match '\*data')[0] -match '\*data\W*([\w\n\s\.]*)') {
    Write-Host "Writing Application:" $Matches[1]
    exit
} elseif (($content -match 'M€.*WA(.*)s¤')[0] -match 'M€.*WA(.*)s¤') {
    Write-Host "Writing Application:" $Matches[1]
}
It looks like it is the last bytes in the file, after *data, that specify the writer, so try this:
(Get-Content -Path "c:\video.mkv" -Tail 1) -match '\*data\W*(.*)$' | out-null
write-host "Writing Application:" $Matches[1]
On my test file that resulted in "HandBrake 1.5.1 2022011000"
I'm not sure what standard specifies this, sorry. There's also a host of useful info on the first line of data in the file as well, e.g.:
ftypmp42 mp42iso2avc1mp41 free6dÊmdat ôÿÿðÜEé½æÙH·–,Ø Ù#îïx264 - core 164 r3065 ae03d92 - H.264/MPEG-4 AVC codec - Copyleft 2003-2021 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x1:0x111 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=0 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=18 lookahead_threads=5 sliced_threads=0 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=3 b_pyramid=2 b_adapt=1 b_bias=0 direct=1 weightb=1 open_gop=0 weightp=1 keyint=250 keyint_min=25 scenecut=40 intra_refresh=0 rc_lookahead=10 rc=crf mbtree=1 crf=23.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=14000 vbv_bufsize=14000 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
I couldn't replicate your success viewing the info with Windows Explorer; the field is invisible for me, even though I can view it with MediaInfo etc.
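Since the original goal was to check a whole library of rips, the same tail-of-file check can be wrapped in a loop. A minimal sketch, assuming the rips live under a folder such as D:\Rips (a placeholder path):

Get-ChildItem 'D:\Rips' -Filter *.mkv -Recurse | ForEach-Object {
    # read only the last line of each file and look for the *data marker
    if ((Get-Content -Path $_.FullName -Tail 1) -match '\*data\W*(.*)$') {
        '{0}: {1}' -f $_.Name, $Matches[1]
    } else {
        '{0}: writing application not found' -f $_.Name
    }
}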

GStreamer 1.0 missing plugin: decodebin2 in Python code

The following Python code, which adds three files to a GES timeline, produces the following error that others have also had:
(GError('Your GStreamer installation is missing a plug-in.',), 'gstdecodebin2.c(3928): gst_decode_bin_expose (): /GESPipeline:gespipeline0/GESTimeline:gestimeline0/GESVideoTrack:gesvideotrack0/GnlComposition:gnlcomposition1/GnlSource:gnlsource0/GstBin:videosrcbin/GstURIDecodeBin:uridecodebin0/GstDecodeBin:decodebin4:\nno suitable plugins found')
from gi.repository import GES
from gi.repository import GstPbutils
from gi.repository import Gtk
from gi.repository import Gst
from gi.repository import GObject
import sys
import signal
VIDEOPATH = "file:///path/to/my/video/folder/"
class Timeline:
    def __init__(self, files):
        print Gst._version # prints 1
        self.pipeline = GES.Pipeline()
        container_caps = Gst.Caps.new_empty_simple("video/quicktime")
        video_caps = Gst.Caps.new_empty_simple("video/x-h264")
        audio_caps = Gst.Caps.new_empty_simple("audio/mpeg")
        self.container_profile = GstPbutils.EncodingContainerProfile.new("jane_profile", "mp4 concatation", container_caps, None) #Gst.Caps("video/mp4", None))
        self.video_profile = GstPbutils.EncodingVideoProfile.new(video_caps, None, None, 0)
        self.audio_profile = GstPbutils.EncodingAudioProfile.new(audio_caps, None, None, 0)
        self.container_profile.add_profile(self.video_profile)
        self.container_profile.add_profile(self.audio_profile)
        self.bus = self.pipeline.get_bus()
        self.bus.add_signal_watch()
        self.bus.connect("message", self.busMessageCb)
        self.timeline = GES.Timeline.new_audio_video()
        self.layer = self.timeline.append_layer()
        signal.signal(signal.SIGINT, self.handle_sigint)
        self.start_on_timeline = 0
        for file in files:
            asset = GES.UriClipAsset.request_sync(VIDEOPATH + file)
            print asset.get_duration()
            duration = asset.get_duration()
            clip = self.layer.add_asset(asset, self.start_on_timeline, 0, duration, GES.TrackType.UNKNOWN)
            self.start_on_timeline += duration
            print 'start:' + str(self.start_on_timeline)
        self.timeline.commit()
        self.pipeline.set_timeline(self.timeline)

    def handle_sigint(self, sig, frame):
        Gtk.main_quit()

    def busMessageCb(self, unused_bus, message):
        print message
        print message.type
        if message.type == Gst.MessageType.EOS:
            print "eos"
            Gtk.main_quit()
        elif message.type == Gst.MessageType.ERROR:
            error = message.parse_error()
            print (error)
            Gtk.main_quit()

if __name__ == "__main__":
    GObject.threads_init()
    Gst.init(None)
    GES.init()
    gv = GES.version() # prints 1.2
    timeline = Timeline(['one.mp4', 'two.mp4', 'two.mp4'])
    done = timeline.pipeline.set_render_settings('file:///home/directory/output.mp4', timeline.container_profile)
    print 'done: {0}'.format(done)
    timeline.pipeline.set_mode(GES.PipelineFlags.RENDER)
    timeline.pipeline.set_state(Gst.State.PAUSED)
    Gtk.main()
I have set the GST_PLUGIN_PATH_1_0 environment variable to "/usr/local/lib:/usr/local/lib/gstreamer-1.0:/usr/lib/x86_64-linux-gnu:/usr/lib/i386-linux-gnu/gstreamer-1.0"
I compiled and installed gstreamer1.0-1.2.4, together with the base, good, bad and ugly plugin packages for that version. GES is installed at version 1.2.1, as this was the nearest to the GStreamer version I could find. I also installed gst-libav 1.2.4.
The decodebin2 should be in base according to the make install log for plugin-base and is linked into libgstplayback, which is part of my GST_PLUGIN_PATH_1_0:
/usr/local/lib/gstreamer-1.0 libgstplayback_la-gstdecodebin2.lo
I do have gstreamer0.10 installed as well, and its decodebin2 shows up as blacklisted when I run 'gst-inspect-1.0 -b', since it sits in the gstreamer0.10 library path rather than the 1.0 one.
I tried clearing the ~/.cache/gstreamer files and running gst-inspect-1.0 again to regenerate the plugin registry, but I still keep getting the error in the Python code. This sample code might be wrong, as it is my first stab at writing a timeline using GStreamer Editing Services. I am on Ubuntu Trusty (14.04).
The file is an mp4 file which is why I installed gst-libav for the required libraries.
The output of MP4Box -info on the file is:
Movie Info *
Timescale 90000 - Duration 00:00:08.405
Fragmented File no - 2 track(s)
File suitable for progressive download (moov before mdat)
File Brand mp42 - version 0
Created: GMT Mon Aug 17 17:02:26 2015
File has no MPEG4 IOD/OD
Track # 1 Info - TrackID 1 - TimeScale 50000 - Duration 00:00:08.360
Media Info: Language "English" - Type "vide:avc1" - 209 samples
Visual Track layout: x=0 y=0 width=1920 height=1080
MPEG-4 Config: Visual Stream - ObjectTypeIndication 0x21
AVC/H264 Video - Visual Size 1920 x 1080
AVC Info: 1 SPS - 1 PPS - Profile Main # Level 4.2
NAL Unit length bits: 32
Pixel Aspect Ratio 1:1 - Indicated track size 1920 x 1080
Self-synchronized
Track # 2 Info - TrackID 2 - TimeScale 48000 - Duration 00:00:08.405
Media Info: Language "English" - Type "soun:mp4a" - 394 samples
MPEG-4 Config: Audio Stream - ObjectTypeIndication 0x40
MPEG-4 Audio MPEG-4 Audio AAC LC - 2 Channel(s) - SampleRate 48000 Synchronized on stream 1
Log at pastebin.com/BjJ8Z5Bd from running 'GST_DEBUG=3,gnl*:5 python ./timeline1.py > timeline1.log 2>&1'.
There is no "decodebin2" in GStreamer 1.x, which you're using here. It's just called "decodebin" now and is equivalent to "decodebin2" in 0.10.
Your problem here however is not that decodebin is not found. Your problem is that you're missing a plugin to play this specific media file. What kind of media file is it?
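To narrow down which decoder is actually missing, one option (assuming the standard GStreamer 1.x command-line tools are installed, and that your gst-libav build names its decoders avdec_h264 / avdec_aac) is to ask the tools directly:

# show what the file contains and whether a suitable decoder is found
gst-discoverer-1.0 file:///path/to/my/video/folder/one.mp4

# check that the H.264 and AAC decoders from gst-libav are registered for 1.0
gst-inspect-1.0 avdec_h264
gst-inspect-1.0 avdec_aac

If gst-discoverer-1.0 reports a missing plugin, or gst-inspect-1.0 cannot find those elements, the decoder plugin is probably not being picked up from your GST_PLUGIN_PATH_1_0.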

Systemtap %M printf format only returns one character

I'm trying to print the data received on a socket - the contents of ubuf on the return of sys_recv. I can't get the %M format specifier to work properly. Can someone please explain how to use it properly? Thanks.
stap -L 'kernel.function("sys_recv@net/socket.c")'
kernel.function("sys_recv@net/socket.c:1800") $fd:int $ubuf:void* $size:size_t $flags:unsigned int
using this probe:
[laris@kakitis stap]$ cat socket-recv.stp
#! /usr/bin/env stap
probe kernel.function("sys_recv@net/socket.c").return {
    if (pid() == target())
        printf ("%s fd %d size %d ubuf %p %10M \n ", ppfunc(), $fd, $size, $ubuf, $ubuf)
}
From my reading of the man page, the format %10M should return 10 bytes from the location pointed to by $ubuf:void*, but I only get 1. Adjusting the parameter 10 shifts the one-character output rather than showing more or less memory.
[root@kakitis stap]# stap -x 16796 socket-recv.stp
sys_recv fd 13 size 64071 ubuf 0x86ceca0 70
sys_recv fd 13 size 62679 ubuf 0x86cf210 50
Changing 10 to 2 gives this:
[root@kakitis stap]# stap -x 16796 socket-recv.stp
sys_recv fd 13 size 64071 ubuf 0x86ceca0 70
sys_recv fd 13 size 62679 ubuf 0x86cf210 50
System particulars are:
[laris@kakitis stap]$ stap --version
Systemtap translator/driver (version 2.1/0.154, rpm 2.1-2.fc17)
Copyright (C) 2005-2013 Red Hat, Inc. and others
This is free software; see the source for copying conditions.
enabled features: AVAHI LIBRPM LIBSQLITE3 NSS TR1_UNORDERED_MAP NLS
[laris@kakitis stap]$ uname -a
Linux kakitis 3.4.33 #1 SMP Tue Jan 7 14:15:58 EST 2014 i686 i686 i386 GNU/Linux
[laris@kakitis stap]$ cat /etc/redhat-release
Fedora release 17 (Beefy Miracle)
Don't confuse the output-width and precision parameters for printf(). The following will do what you meant:
printf ("%33.10M", $pointer)
to print 10 bytes (20 hex characters) in a 33-character-wide output field. One or both numbers can be replaced by *, so that the respective widths are passed as parameters before the $pointer. The upstream man page has been updated with an example.
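Applied to the probe above, that means putting the byte count in the precision field rather than the width field - roughly like this (a sketch based on the original probe; only the format string changes):

probe kernel.function("sys_recv@net/socket.c").return {
    if (pid() == target())
        # %.10M dumps 10 bytes of the buffer as hex; a bare %10M only sets the output width
        printf("%s fd %d size %d ubuf %p %.10M\n", ppfunc(), $fd, $size, $ubuf, $ubuf)
}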

ffmpeg API h264 encoded video does not play on all platforms

Edit: In the previous version I used a very old ffmpeg API. I now use the newest libraries. The problem has only changed slightly, from "Main" to "High".
I am using the ffmpeg C API to create an mp4 video in C++.
I want the resulting video to have the "Constrained Baseline" profile, so that it can be played on as many platforms as possible, especially mobile, but I get the "High" profile every time, even though I hard-coded the codec profile to FF_PROFILE_H264_CONSTRAINED_BASELINE. As a result, the video does not play on all our testing platforms.
This is what "ffprobe video.mp4 -show_streams" tells me about the video streams:
Metadata:
major_brand : isom
minor_version : 512
compatible_brands: isomiso2avc1mp41
creation_time : 1970-01-01 00:00:00
encoder : Lavf53.5.0
Duration: 00:00:13.20, start: 0.000000, bitrate: 553 kb/s
Stream #0:0(und): Video: h264 (Main) (avc1 / 0x31637661), yuv420p, 320x180, 424 kb/s, 15 fps, 15 tbr, 15 tbn, 30 tbc
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : VideoHandler
Stream #0:1(und): Audio: aac (mp4a / 0x6134706D), 44100 Hz, stereo, s16, 12
kb/s
Metadata:
creation_time : 1970-01-01 00:00:00
handler_name : SoundHandler
-------VIDEO STREAM--------
[STREAM]
index=0
codec_name=h264
codec_long_name=H.264 / AVC / MPEG-4 AVC / MPEG-4 part 10
profile=High <-- This should be "Constrained Baseline"
codec_type=video
codec_time_base=1/30
codec_tag_string=avc1
codec_tag=0x31637661
width=320
height=180
has_b_frames=0
sample_aspect_ratio=N/A
display_aspect_ratio=N/A
pix_fmt=yuv420p
level=30
timecode=N/A
is_avc=1
nal_length_size=4
id=N/A
r_frame_rate=15/1
avg_frame_rate=15/1
time_base=1/15
start_time=0.000000
duration=13.200000
bit_rate=424252
nb_frames=198
nb_read_frames=N/A
nb_read_packets=N/A
TAG:creation_time=1970-01-01 00:00:00
TAG:language=und
TAG:handler_name=VideoHandler
[/STREAM]
-------AUDIO STREAM--------
[STREAM]
index=1
codec_name=aac
codec_long_name=Advanced Audio Coding
profile=unknown
codec_type=audio
codec_time_base=1/44100
codec_tag_string=mp4a
codec_tag=0x6134706d
sample_fmt=s16
sample_rate=44100
channels=2
bits_per_sample=0
id=N/A
r_frame_rate=0/0
avg_frame_rate=0/0
time_base=1/44100
start_time=0.000000
duration=13.165714
bit_rate=125301
nb_frames=567
nb_read_frames=N/A
nb_read_packets=N/A
TAG:creation_time=1970-01-01 00:00:00
TAG:language=und
TAG:handler_name=SoundHandler
[/STREAM]
This is the function I use to add a video stream. All the values that come from ptr-> are defined externally; do those values have to be set to something specific to get the correct profile?
static AVStream *add_video_stream( Cffmpeg_dll * ptr, AVFormatContext *oc, enum CodecID codec_id )
{
    AVCodecContext *c;
    AVStream *st;
    AVCodec* codec;

    // Get correct codec
    codec = avcodec_find_encoder(codec_id);
    if (!codec) {
        av_log(NULL, AV_LOG_ERROR, "%s", "Video codec not found\n");
        exit(1);
    }

    // Create stream
    st = avformat_new_stream(oc, codec);
    if (!st) {
        av_log(NULL, AV_LOG_ERROR, "%s", "Could not alloc stream\n");
        exit(1);
    }

    c = st->codec;

    /* Get default values */
    codec = avcodec_find_encoder(codec_id);
    if (!codec) {
        av_log(NULL, AV_LOG_ERROR, "%s", "Video codec not found (default values)\n");
        exit(1);
    }
    avcodec_get_context_defaults3(c, codec);

    c->codec_id = codec_id;
    c->codec_type = AVMEDIA_TYPE_VIDEO;
    c->bit_rate = ptr->video_bit_rate;
    av_log(NULL, AV_LOG_ERROR, " Bit rate: %i", c->bit_rate);
    c->qmin = ptr->qmin;
    c->qmax = ptr->qmax;
    c->me_method = ptr->me_method;
    c->me_subpel_quality = ptr->me_subpel_quality;
    c->i_quant_factor = ptr->i_quant_factor;
    c->qcompress = ptr->qcompress;
    c->max_qdiff = ptr->max_qdiff;

    // We need to set the level and profile to get videos that play (hopefully) on all platforms
    c->level = 30;
    c->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE;

    c->width = ptr->dstWidth;
    c->height = ptr->dstHeight;
    c->time_base.den = ptr->fps;
    c->time_base.num = 1;
    c->gop_size = ptr->fps;
    c->pix_fmt = STREAM_PIX_FMT;
    c->max_b_frames = 0;

    // some formats want stream headers to be separate
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        c->flags |= CODEC_FLAG_GLOBAL_HEADER;

    return st;
}
Additional info:
As a reference video, I use the gizmo.mp4 that Mozilla serves as an example that plays on every platform/browser. It definitely has the "Constrained Baseline" profile, and definitely works on all our testing smartphones. You can download it here. Our self-created video doesn't work on all platforms and I'm convinced this is because of the profile.
I am also using qt-faststart.exe to move the headers to the start of the file after creating the mp4, as this cannot be done in a good way in C++ directly. Could that be the problem?
Obviously, I am doing something wrong, but I don't know what it could be. I'd be thankful for every hint ;)
I have the solution. After spending some time in discussions on the ffmpeg bug tracker and browsing for profile-setting examples, I finally figured it out.
One needs to use av_opt_set(codecContext->priv_data, "profile", "baseline" (or any other desired profile), AV_OPT_SEARCH_CHILDREN)
So in my case that would be:
Wrong:
// We need to set the level and profile to get videos that play (hopefully) on all platforms
c->level = 30;
c->profile = FF_PROFILE_H264_CONSTRAINED_BASELINE;
Correct:
// Set profile to baseline
av_opt_set(c->priv_data, "profile", "baseline", AV_OPT_SEARCH_CHILDREN);
Completely unintuitive and contrary to the rest of the API usage, but that's ffmpeg philosophy. You don't need to understand it, you just need to understand how to use it ;)

Using FFMPEG to reliably convert videos to mp4 for iphone/ipod and flash players

I need to convert videos for use in both a Flash player and the iPhone/iPod touch. I'm using the following batch script with ffmpeg:
#echo off
ffmpeg.exe -i %1 -s qvga -acodec libfaac -ar 22050 -ab 128k -vcodec libx264 -threads 0 -f ipod %2
This always outputs an mp4 file, and I can always play it on my PC. The videos also seem to play fine on my iPhone 3GS, but with some input files they won't play on older devices (iPhone 3G and iPod touch).
Here's the ffmpeg output from one such file:
D:\ffmpeg>encode.bat d:\temp\recording.flv d:\temp\out.m4v
FFmpeg version SVN-r18709, Copyright (c) 2000-2009 Fabrice Bellard, et al.
configuration: --enable-memalign-hack --prefix=/mingw --cross-prefix=i686-ming
w32- --cc=ccache-i686-mingw32-gcc --target-os=mingw32 --arch=i686 --cpu=i686 --e
nable-avisynth --enable-gpl --enable-zlib --enable-bzlib --enable-libgsm --enabl
e-libfaac --enable-libfaad --enable-pthreads --enable-libvorbis --enable-libtheo
ra --enable-libspeex --enable-libmp3lame --enable-libopenjpeg --enable-libxvid -
-enable-libschroedinger --enable-libx264
libavutil 50. 3. 0 / 50. 3. 0
libavcodec 52.27. 0 / 52.27. 0
libavformat 52.32. 0 / 52.32. 0
libavdevice 52. 2. 0 / 52. 2. 0
libswscale 0. 7. 1 / 0. 7. 1
built on Apr 28 2009 04:04:42, gcc: 4.2.4
[flv @ 0x187d650]skipping flv packet: type 18, size 164, flags 0
Input #0, flv, from 'd:\temp\recording.flv':
Duration: 00:00:07.17, start: 0.001000, bitrate: N/A
Stream #0.0: Video: flv, yuv420p, 320x240, 1k tbr, 1k tbn, 1k tbc
Stream #0.1: Audio: nellymoser, 44100 Hz, mono, s16
[libx264 @ 0x13518b0]using cpu capabilities: MMX2 SSE2Fast SSSE3 FastShuffle SSE4.2
[libx264 @ 0x13518b0]profile Baseline, level 4.2
Output #0, ipod, to 'd:\temp\out.m4v':
Stream #0.0: Video: libx264, yuv420p, 320x240, q=2-31, 200 kb/s, 1k tbn, 1k tbc
Stream #0.1: Audio: libfaac, 22050 Hz, mono, s16, 128 kb/s
Stream mapping:
Stream #0.0 -> #0.0
Stream #0.1 -> #0.1
Press [q] to stop encoding
frame= 90 fps= 0 q=-1.0 Lsize= 128kB time=6.87 bitrate= 152.4kbits/s
video:92kB audio:32kB global headers:1kB muxing overhead 2.620892%
[libx264 @ 0x13518b0]slice I:8 Avg QP:29.62 size: 7047
[libx264 @ 0x13518b0]slice P:82 Avg QP:30.83 size: 467
[libx264 @ 0x13518b0]mb I I16..4: 17.9% 0.0% 82.1%
[libx264 @ 0x13518b0]mb P I16..4: 0.6% 0.0% 0.0% P16..4: 23.1% 0.0% 0.0% 0.0% 0.0% skip:76.3%
[libx264 @ 0x13518b0]final ratefactor: 57.50
[libx264 @ 0x13518b0]SSIM Mean Y:0.9544735
[libx264 @ 0x13518b0]kb/s:8412.6
My suspicion is that it has something to do with the audio encoding. If so, does anyone know how to force it to re-encode the audio into the proper format?
Any other ideas?
WARNING: this answer is 10 years old and reported not to work anymore.
I think the issue is the H.264 level being level 4.2.
Some of the Apple devices only support up to 3.0.
Here's the FFMPEG settings I usually use:
ffmpeg -i YOUR-INPUT.wmv -s qvga -b 384k -vcodec libx264 -r 23.976 -acodec libfaac -ac 2 -ar 44100 -ab 64k -vpre baseline -crf 22 -deinterlace -o YOUR-OUTPUT.MP4
You can adjust the rate, size and bitrate as needed. The important settings are in the baseline config param.
The ffmpeg wiki provides some useful, up-to-date guidance on how to encode H.264 for particular devices. Here's an excerpt from Apple's docs with the corresponding profiles:
iOS Compatibility
Profile    Level  Devices                                                         Options
Baseline   3.0    All devices                                                     -profile:v baseline -level 3.0
Baseline   3.1    iPhone 3G and later, iPod touch 2nd generation and later        -profile:v baseline -level 3.1
Main       3.1    iPad (all versions), Apple TV 2 and later, iPhone 4 and later   -profile:v main -level 3.1
Main       4.0    Apple TV 3 and later, iPad 2 and later, iPhone 4s and later     -profile:v main -level 4.0
High       4.0    Apple TV 3 and later, iPad 2 and later, iPhone 4s and later     -profile:v high -level 4.0
High       4.1    iPad 2 and later, iPhone 4s and later, iPhone 5c and later      -profile:v high -level 4.1
High       4.2    iPad Air and later, iPhone 5s and later                         -profile:v high -level 4.2
ffmpeg -i test.mov -profile:v baseline -level 3.0 test.mp4
This disables some features but offers greater compatibility.
Also, here are some useful optional tags to add for working with the quality and file size:
-preset: ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, placebo
-crf: 0-51
(-preset controls how long compression takes: faster presets give a bigger file, slower presets a smaller one. -crf controls the video quality: lower values mean higher quality and a bigger file, higher values mean lower quality and a smaller file.)
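For example, combining the compatibility options from the table above with a preset and CRF value could look something like this (file names are placeholders):

ffmpeg -i test.mov -c:v libx264 -profile:v baseline -level 3.0 -preset slow -crf 22 test.mp4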
The listed ffmpeg settings didn't work for me (I don't seem to have the "baseline" preset). I posted ffmpeg settings that don't reference that preset over here: iPhone "cannot play" .mp4 H.264 video file
Spoiler:
ffmpeg -i INPUT -s 320x240 -r 30000/1001 -b 200k -bt 240k -vcodec libx264 -coder 0 -bf 0 -refs 1 -flags2 -wpred-dct8x8 -level 30 -maxrate 10M -bufsize 10M -acodec libfaac -ac 2 -ar 48000 -ab 192k OUTPUT.mp4
The official Apple reference on the subject: http://developer.apple.com/library/safari/#documentation/AppleApplications/Reference/SafariWebContent/CreatingVideoforSafarioniPhone/CreatingVideoforSafarioniPhone.html
Try this Python script. I wrote it for myself; maybe you will find it useful too. It converts files to mp4.
Because of SO rules, here is the complete source code:
#!/usr/bin/python
# Copyright (C) 2007-2010 CDuke
# This program is free software. You may distribute it under the terms of
# the GNU General Public License as published by the Free Software
# Foundation, version 2.
#
# This program is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details.
#
# This program converts video files to mp4, suitable to be played on an iPod
# or an iPhone. It is careful about maintaining the proper aspect ratio.
from __future__ import division
from datetime import datetime
import sys
import argparse
import os
import re
import shlex
import time
from subprocess import Popen, PIPE
DEFAULT_ARGS = '-f mp4 -y -vcodec libxvid -maxrate 1000k -mbd 2 -qmin 3 -qmax 5 -g 300 -bf 0 -acodec libfaac -ac 2 -flags +mv4 -trellis 2 -cmp 2 -subcmp 2'
#DEFAULT_ARGS = '-f mp4 -y -vcodec mpeg4 -vtag xvid -maxrate 1000k -mbd 2 -qmin 3 -qmax 5 -g 300 -bf 0 -acodec libfaac -ac 2 -r 30000/1001 -flags +mv4 -trellis 2 -cmp 2 -subcmp 2'
#DEFAULT_ARGS = '-y -f mp4 -vcodec libxvid -acodec libfaac'
DEFAULT_BUFSIZE = '4096k'
DEFAULT_AUDIO_BITRATE = '128k'
DEFAULT_VIDEO_BITRATE = '400k'
FFMPEG = '/usr/bin/ffmpeg'
class device:
    '''Describe properties of device'''
    def __init__(self, name, width, height):
        self.name = name
        self.width = width
        self.height = height

class videoFileInfo:
    def __init__(self, width, height, duration):
        self.width = width
        self.height = height
        self.duration = duration

devices = [device('ipod', 320, 240), device('iphone', 480, 320),
           device('desire', 800, 480)]

def getOutputFileName(inputFileName, outDir):
    if outDir == None:
        outFileName = os.path.splitext(inputFileName)[0] + '.mp4'
    else:
        outFileName = os.path.join(outDir, os.path.basename(inputFileName))
    return outFileName

def getVideoFileInfo(fileName):
    p = Popen([FFMPEG, '-i', fileName], stdout = PIPE, stderr = PIPE)
    fileInfo = p.communicate()[1]
    videoRes = re.search(b'Video:.+ (\d+)x(\d+)', fileInfo)
    w = float(videoRes.group(1))
    h = float(videoRes.group(2))
    duratMatch = re.search(b'Duration:\s+(\d+):(\d+):(\d+)\.(\d+)', fileInfo)
    duration = float(duratMatch.group(1)) * 3600
    duration += float(duratMatch.group(2)) * 60
    duration += float(duratMatch.group(3))
    duration += float(duratMatch.group(4)) / 10
    fileInfo = videoFileInfo(w, h, duration)
    return fileInfo

def getArguments(width, height, aspect):
    args = {}
    w = width
    h = w // aspect
    h -= (h % 2)
    if h <= height:
        pad = (height - h) // 2
        pad -= (pad % 2)
        pady = pad
        padx = 0
    else:
        # recalculate using the height as the baseline rather than the width
        h = height
        w = int(h * aspect)
        width -= (width % 2)
        pad = (width - w) // 2
        pad -= (pad % 2)
        padx = pad
        pady = 0
    args['width'] = w
    args['height'] = h
    args['padx'] = padx
    args['pady'] = pady
    return args

def getProgressBar(perc):
    convInfo = 'Converted: [{}] {:.2%} \r'
    num_hashes = round(perc * 100 // 2)
    bar = '=' * num_hashes + ' ' * (50 - num_hashes)
    return convInfo.format(bar, perc)

def convert(inputFileName, outputFileName, args, audioBitrate, videoBitrate, devWidth, devHeight, aspect, duration):
    cmd = '{ffmpeg} -i {inFile} {defaultArgs} -bufsize {bufsize} -s {width}x{height} -vf "pad={devWidth}:{devHeight}:{padx}:{pady},aspect={aspect}" -ab {audioBitrate} -b {videoBitrate} {outFile}'.format(ffmpeg=FFMPEG, inFile=inputFileName, defaultArgs=DEFAULT_ARGS, bufsize=DEFAULT_BUFSIZE, devWidth=devWidth, devHeight=devHeight, padx=args['padx'], pady=args['pady'], width=args['width'], height=args['height'], aspect=aspect, audioBitrate=audioBitrate, videoBitrate=videoBitrate, outFile=outputFileName)
    # cmd = '{ffmpeg} -i {inFile} {defaultArgs} -bufsize {bufsize} -s {width}x{height} -ab {audioBitrate} -b {videoBitrate} {outFile}'.format(ffmpeg=FFMPEG, inFile=inputFileName, defaultArgs=DEFAULT_ARGS, bufsize=DEFAULT_BUFSIZE, width=args['width'], height=args['height'], audioBitrate=audioBitrate, videoBitrate=videoBitrate, outFile=outputFileName)
    print(cmd)
    print()
    start = datetime.today()
    print('Converting started at ' + str(start))
    conv = Popen(shlex.split(cmd), shell=False, stdout=PIPE, stderr=PIPE)
    while conv.poll() is None:
        out = os.read(conv.stderr.fileno(), 2048)
        last = out.splitlines()[-1]
        timeMatch = re.search(b'time=([^\s]+)', last)
        if timeMatch:
            timeDone = float(timeMatch.group(1))
            perc = timeDone / duration
            if sys.version_info > (3, 0):
                exec("print(getProgressBar(perc), end='')")
            else:
                exec("print getProgressBar(perc),")
            sys.stdout.flush()
        # else:
        #     print(out)
        time.sleep(0.5)
    print(getProgressBar(1))
    end = datetime.today()
    print('Converting ended at ' + str(end))
    print('Spended time: ' + str(end - start))

class mp4Converter(argparse.Action):
    def __call__(self, parser, namespace, values, option_string = None):
        outdir = namespace.outdir
        for f in values:
            outFileName = getOutputFileName(f.name, outdir)
            fileInfo = getVideoFileInfo(f.name)
            aspect = fileInfo.width / fileInfo.height
            dev = next(d for d in devices if d.name == namespace.device)
            args = getArguments(dev.width, dev.height, aspect)
            convert(f.name, outFileName, args, namespace.AUDIO_BITRATE, namespace.VIDEO_BITRATE, dev.width, dev.height, aspect, fileInfo.duration)
            print('file "{0}" converted successful'.format(f.name))

opts = argparse.ArgumentParser(
    description = 'Converter to MP4',
    epilog = 'made by CDuke 2010')
opts.add_argument('-V', '--version',
    action = 'version',
    version = '0.0.1')
opts.add_argument('-v', '--verbose',
    action = 'store_true',
    default = False,
    help = 'verbose')
opts.add_argument('-a', '--audio',
    dest = 'AUDIO_BITRATE',
    default = DEFAULT_AUDIO_BITRATE,
    help = 'override default audio bitrate {0}'.format(DEFAULT_AUDIO_BITRATE))
opts.add_argument('-b', '--video',
    dest = 'VIDEO_BITRATE',
    default = DEFAULT_VIDEO_BITRATE,
    help = 'override default video bitrate {0}'.format(DEFAULT_VIDEO_BITRATE))
opts.add_argument('-d', '--device',
    choices = [d.name for d in devices],
    default = 'ipod',
    help = 'device that will play video')
opts.add_argument('-o', '--outdir',
    help = 'write files to given directory')
opts.add_argument('file',
    nargs = '+',
    type = argparse.FileType('r'),
    action = mp4Converter,
    help = 'file that will be converted')
opts.parse_args()
ffmpeg -i input.mov -c:v libx264 -pix_fmt yuv420p -profile:v main -crf 1 -preset medium -c:a aac -movflags +faststart output.mp4
ffmpeg.exe -i "Video.mp4" -vcodec libx264 -preset fast -profile:v baseline -lossless 1 -vf "scale=720:540,setsar=1,pad=720:540:0:0" -acodec aac -ac 2 -ar 22050 -ab 48k "Video (SD).mp4"
I had the same problem. I mainly wanted to convert videos for an iPod 5G. Every bit of info I found was either outdated or did not work for me.
Finally I stumbled upon some parameters that worked:
ffmpeg -i "INPUTFILE" \
-f mp4 -vcodec mpeg4 \
-vf scale=-2:320 \
-maxrate 1536k -b:v 768 -qmin 3 -qmax 5 -bufsize 4096k -g 300 \
-c:a aac -b:a 128k -ar 44100 -ac 2 \
"OUTPUTFILE.mp4"
Some remarks:
I let ffmpeg scale the video to a height of 320, because for me this is the sweet spot. The files are not too big, and they also sync to an iPod Touch, which has a screen of 480x320 and then uses the full video. The screen of the iPod is 320x240, so it could also be scaled to a height of 240. The iPod seems to support up to 480p, which is nice if you want to output the video to some other device.
The audio quality can be changed if needed.
I did not try touching the other parameters as they work fine and produce decent results.
EDIT
I modified the command a tiny bit, as I had problems with some files that had audio tracks encoded at 48 kHz with more than 2 channels.
I got here because the simplest ffmpeg conversion approach was not producing an mp4 that would play on iOS, for some reason.
I found settings that work for me (in 2019) here:
https://gist.github.com/jaydenseric/220c785d6289bcfd7366
ffmpeg -i input.mov -c:v libx264 -pix_fmt yuv420p -profile:v baseline -level 3.0 -crf 22 -preset veryslow -vf scale=1280:-2 -c:a aac -strict experimental -movflags +faststart -threads 0 output.mp4