How to get the exact encoding time using the compiled FFmpeg library - encoding

I have configured and compiled the FFmpeg source to enable hardware-accelerated encoding (h264_omx) on a Raspberry Pi 4B.
Can anyone please tell me how to get the exact encoding time from the script? Can anyone point out which function or API call inside the FFmpeg code is responsible for hardware-accelerated encoding?
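For reference, if you are timing the ffmpeg binary from a script, the simplest approach is to add the -benchmark flag, which prints the elapsed user/system/real time when the run finishes. If you are driving the libraries yourself, a minimal sketch along these lines could time the encode calls with av_gettime(); enc_ctx is a hypothetical, already-opened h264_omx encoder context, and the sketch uses the send/receive encode API (as far as I can tell, the OMX hardware encoder itself is implemented in libavcodec/omx.c):

#include <libavcodec/avcodec.h>
#include <libavutil/time.h>   /* av_gettime() */

/* Hypothetical helper: time one frame's trip through the encoder.
 * enc_ctx is assumed to be an AVCodecContext already opened with the
 * h264_omx encoder; frame and pkt are allocated by the caller. */
static int64_t encode_one_frame_timed(AVCodecContext *enc_ctx,
                                      AVFrame *frame, AVPacket *pkt)
{
    int64_t start = av_gettime();          /* wall-clock time in microseconds */

    if (avcodec_send_frame(enc_ctx, frame) < 0)
        return -1;
    while (avcodec_receive_packet(enc_ctx, pkt) == 0)
        av_packet_unref(pkt);              /* consume whatever the encoder emits */

    return av_gettime() - start;           /* elapsed time in microseconds */
}

Note that a hardware encoder works asynchronously, so the packet for a given frame may only come out on a later call; summing these per-call times over the whole run (or simply using -benchmark) gives a more meaningful total.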

Related

RTSP Streaming on iOS 6 with Xcode 4.6.1

I need to develop an app that is capable of receiving an RTSP stream.
I have been trying to find solutions/tutorials on the internet for a whole day now, but without any success.
I read a lot about using FFmpeg or Live555 (mostly FFmpeg; I also read that Live555 is not necessary when using the newest version of FFmpeg), but nowhere I looked was it described in a form I could understand, and when I found questions on Stack Overflow the answers were really short and I could not figure out what they were trying to explain.
So now I need to ask here myself.
I used "Homebrew" to download and install FFMPEG, now when I look at my dir /usr/local/
I can see this, the installed files are contained in subfolders of "Cellar"
I also tried to have a look at these projects: RTSPPlay by Mooncatventures and kxmovie by kolyvan.
I did not really figure out how to work with these projects; the documentation is vague and "murky".
Well, when I tried to compile these projects, kxmovie fails with errors like "missing avformat.h".
I added the dylibs from usr/local/cellar/ffmpeg/1.2.1/lib to the project, but it seems that this is not the right method.
Nearly the same issue with the RTSPPlay Xcode project: it reports that an "Entitlements.plist" is missing, and after removing the references to that file completely I get 99+ Apple Mach-O Linker Errors; honestly, I could not understand why.
I wanted to try Live555 too, but I can't see through all of its obscure and confusing files; again I could not make sense of the documentation or figure out how to build the libraries for iphoneos (I read it is the easiest way to receive an RTSP stream, but it was the same stack of confusing files as the other projects).
Maybe someone who has tried these projects or developed such an application himself could help me with his/her source code, or somebody who understands the contents of the FFmpeg / Homebrew directories could explain to me how to use them; that would probably help me and all the other desperate developers who are searching for a solution.
Just a little edit: I am trying to receive and decode an RTSP H.264 video stream.
Thanks in advance, Maurice Arikoglu.
(If you need any kind of source code, links, screenshots, etc., please let me know.)
For anyone who is searching for a working RTSP player library, have a look at Durfu's Mooncat fork that I found online; it works perfectly fine on iOS 8. There is sample code included, and if you struggle with the implementation I can help you.
I've examined many of the projects that you mentioned and have gained good experience with them.
You can use the FFmpeg libraries to make an RTSP streaming client, as FFmpeg supports the RTSP protocol. But there is an important issue which I've seen in my tests: FFmpeg's RTSP implementation has problems when using the UDP transport layer for streaming. Even with the latest version (2.01), many RTP packets are dropped during streaming, so images sometimes become glitchy. If you use the TCP or HTTP transport layer, then it works well.
As for the Live555 project, this library works well with both UDP and TCP transports when streaming RTSP. But FFmpeg is much more powerful and has many more capabilities than Live555.
If you decide to use FFmpeg, then basically you should follow the steps below (a minimal sketch of the resulting loop follows the list):
make a connection - avformat_open_input
find audio/video codecs - avcodec_find_decoder
open audio/video codecs - avcodec_open2
read encoded packets from file/network stream in a loop - av_read_frame
A. audio packets:
a. decode audio packets - avcodec_decode_audio4
b. fill audio buffers with the AudioUnit/AudioQueue framework
B. video packets:
a. decode video packets - avcodec_decode_video2
b. convert YUV pictures to RGB pictures - sws_scale or OpenGL ES shaders
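A minimal sketch of that loop for the video side (error handling stripped, and using the decode calls named above, which are from the pre-3.x API; the "rtsp_transport" option is set to TCP to avoid the UDP packet-loss issue mentioned earlier):

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Minimal RTSP/file decode skeleton following the steps above.
 * "url" could be e.g. "rtsp://host/stream". */
int decode_stream(const char *url)
{
    AVFormatContext *fmt = NULL;
    AVDictionary *opts = NULL;

    av_register_all();
    avformat_network_init();
    av_dict_set(&opts, "rtsp_transport", "tcp", 0);        /* TCP instead of UDP */

    if (avformat_open_input(&fmt, url, NULL, &opts) < 0)   /* 1. make a connection */
        return -1;
    avformat_find_stream_info(fmt, NULL);

    /* 2./3. find and open the video decoder (audio works the same way) */
    int vid = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    AVCodecContext *vctx = fmt->streams[vid]->codec;
    AVCodec *vdec = avcodec_find_decoder(vctx->codec_id);
    avcodec_open2(vctx, vdec, NULL);

    AVFrame *frame = av_frame_alloc();
    AVPacket pkt;

    /* 4. read encoded packets in a loop and decode them */
    while (av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == vid) {
            int got = 0;
            avcodec_decode_video2(vctx, frame, &got, &pkt);
            if (got) {
                /* frame now holds a decoded YUV picture:
                 * hand it to sws_scale or an OpenGL ES shader */
            }
        }
        av_free_packet(&pkt);
    }

    av_frame_free(&frame);
    avcodec_close(vctx);
    avformat_close_input(&fmt);
    return 0;
}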
good luck...

Raw H264 NALU hardware decode on iOS

I receive raw H.264 NALUs from an IP camera (via Live555) and I want to decode them using hardware because FFmpeg is great but it's too slow (the camera sensor is large).
The only solution I see is to write the NALUs to some movie container file such as MPEG-4, and then read and decode that file using an AVAssetReader.
Am I off in the weeds? Is anyone having success decoding H.264 NALUs from a stream? Does anyone have any tips for writing NALUs to an MPEG-4 file? Other ideas?
Like Matt mentioned, there is no direct access to Apple's H264 decoder.
However, I have had success with FFmpeg and H.264 decoding. As you mentioned, I built FFmpeg under the LGPL and was able to decode H.264 streams, all the way up to a real-time HD stream with no latency, on both iPad and iPhone. Nothing fancy is required from FFmpeg; you can find plenty of standard decoding C++ code that will work just fine on iOS. Also, in my case the H.264 NALUs were delivered via RTP/RTSP in real time.
Also, if I were you, I would run the app through Xcode Instruments to see where the bottleneck truly is, but I would be highly surprised if it is in the FFmpeg decoding step. Hopefully this info helps.
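For reference, a rough sketch of the FFmpeg side (names like nalu/nalu_size are just placeholders for whatever Live555 hands you, and dec_ctx is assumed to have been opened once at startup with avcodec_find_decoder(AV_CODEC_ID_H264) and avcodec_open2()):

#include <libavcodec/avcodec.h>

/* Sketch: decode one Annex-B framed H.264 NALU (start code included). */
static void decode_nalu(AVCodecContext *dec_ctx, AVFrame *frame,
                        const uint8_t *nalu, int nalu_size)
{
    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.data = (uint8_t *)nalu;   /* buffer must start with 00 00 00 01 */
    pkt.size = nalu_size;

    int got_frame = 0;
    if (avcodec_decode_video2(dec_ctx, frame, &got_frame, &pkt) >= 0 && got_frame) {
        /* frame->data[0..2] now hold the decoded YUV planes;
         * convert with sws_scale or render them with a shader */
    }
}

The decoder will not emit pictures until it has seen the SPS/PPS NALUs, so push those through first (Live555 usually provides them in-band or via the SDP's sprop-parameter-sets).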
Unfortunately, you cannot do this at present. Feel free to file a radar with Apple about wanting this sort of access to the hardware decoder. It'll certainly be resolved as a duplicate :-). I assume it is for licensing reasons that they can't give this sort of access to the hardware codec.
So, you're going to have to use a software decoder. Please be aware that if you're going to ship to the App Store then you need something with a non-GPL license (unless you want to open source your app as well).

“Hook” libMMS to FFmpeg for iPhone Streaming

These days, I have been researching the software architecture for iPhone streaming (based on the MMS protocol).
As we know, in order to play back an MMS audio stream, we should call libMMS to read WMA stream data from the remote media server, then call FFmpeg to decode the stream data from WMA format into a PCM data buffer, and finally enqueue the PCM data buffer into the iPhone's AudioQueue to generate the actual sound.
The introduction above just describes the working process of iPhone streaming. If we only need to implement this simple functionality, that is not difficult: just follow the introduction above and call libMMS, FFmpeg, and AudioQueue step by step, and we can achieve the streaming function. Actually, I implemented that code last week.
But what I need is not only a simple streaming function! I need a software architecture that makes FFmpeg access libMMS just like it accesses the local filesystem!
Does anybody know how to hook the libMMS interfaces like mms_read/mms_seek onto FFmpeg filesystem interfaces like av_read_frame/av_seek_frame?
I think I have to answer my own question again this time...
After several weeks of research and debugging, I finally got to the truth.
Actually, we don't need to "hook" libMMS onto FFmpeg. Why? Because FFmpeg already has its own native MMS protocol module, "mms_protocol" (see mms_protocol.c in FFmpeg).
All we need to do is configure FFmpeg to enable the MMS module like this (see config.h in FFmpeg):
#define ENABLE_MMS_PROTOCOL 1
#define CONFIG_MMS_PROTOCOL 1
After this configuration, FFmpeg will add the MMS protocol to its protocol list. (The protocol list already contains the "local file system" protocol.) As a result, FFmpeg is able to treat an "mms://hostserver/abc" media file like a local media file. Therefore, we can still open and read the MMS media file using:
av_open_input_file();
av_read_frame();
just like we did with a local media file before!
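In other words, something along these lines (sketched against the old av_open_input_file() API that my FFmpeg version uses; the URL is just the example from above):

#include <libavformat/avformat.h>

/* Sketch: open an MMS URL exactly as if it were a local file, once the
 * MMS protocol has been enabled in config.h as described above. */
int play_mms(void)
{
    AVFormatContext *ctx = NULL;
    AVPacket pkt;

    av_register_all();

    /* same call we would use for a local .wma file */
    if (av_open_input_file(&ctx, "mms://hostserver/abc", NULL, 0, NULL) != 0)
        return -1;

    while (av_read_frame(ctx, &pkt) >= 0) {
        /* pkt holds encoded WMA data: decode it with libavcodec,
         * then push the PCM output into the AudioQueue as before */
        av_free_packet(&pkt);
    }

    av_close_input_file(ctx);
    return 0;
}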
By the way, in my FFmpeg version there are still many bugs in the libavformat module for processing the MMS protocol. It took me a week to debug them; however, I think it will take much less time for someone as smart as you :-)

FFmpeg on iPhone

I have downloaded the FFmpeg libraries for iPhone and compiled them. My objective is to create a movie file from a series of images using the FFmpeg libraries. The amount of documentation for FFmpeg on iPhone is very limited. I checked an app called iFrameExtractor, which does the opposite of what I want: it extracts frames from a video.
On the command line there is a command:
ffmpeg -f image2 -i image%d.jpg video.mov
This turns a series of images into a video. I actually checked it on my Mac and it works fine. What I wanted to know is how to get the equivalent on iPhone, or rather which class/API or method to call. There are a couple of examples of apps doing this on iPhone; I'm not sure whether they do it through FFmpeg, though. Anyway, for reference: "Time lapser" and "Reel Moments".
Thanks in advance
Do you need sound?
If you do not need sound, you can try to use OpenCV.
I think it is easier to use. You just call a method to append a picture to the movie:
http://opencv.willowgarage.com/documentation/c/reading_and_writing_images_and_video.html
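A rough sketch with the old OpenCV C API from that page (the file names, frame size, and MJPG FourCC are just examples, and it assumes your OpenCV build has video writing enabled):

#include <stdio.h>
#include <opencv/highgui.h>

/* Sketch: append a series of JPEG images to a movie file with OpenCV. */
int images_to_movie(void)
{
    CvVideoWriter *writer = cvCreateVideoWriter("video.avi",
                                                CV_FOURCC('M', 'J', 'P', 'G'),
                                                25, cvSize(640, 480), 1);
    char name[64];
    for (int i = 1; i <= 100; i++) {
        snprintf(name, sizeof(name), "image%d.jpg", i);
        IplImage *img = cvLoadImage(name, CV_LOAD_IMAGE_COLOR);
        if (!img)
            break;
        cvWriteFrame(writer, img);        /* append one picture to the movie */
        cvReleaseImage(&img);
    }
    cvReleaseVideoWriter(&writer);
    return 0;
}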
Someone please correct me if I'm wrong, but I don't believe iPhone apps can call external executables (which makes sense, that would be multitasking). You'll have to use libav* to accomplish what you're after. Compile ffmpeg for the iPhone:
http://github.com/gabriel/ffmpeg-iphone-build
then read the API docs at ffmpeg.org. For each library there should be an example file in the FFmpeg source that demonstrates usage of that library's core functions, e.g. in libavcodec there is a file called api-example.c, and in libavformat there's one called output-example.c.
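As a starting point, the video part boils down to something like the following sketch (written against the old-style API of that era, in the spirit of api-example.c; filling the AVFrame from your JPEGs, e.g. via libswscale, is left out, and to get a real .mov container you would go through libavformat as in output-example.c instead of fwrite()):

#include <stdio.h>
#include <libavcodec/avcodec.h>

/* Sketch: encode a sequence of frames into a raw video bitstream. */
void encode_images(const char *outfile, int width, int height, int nframes)
{
    avcodec_register_all();

    AVCodec *codec = avcodec_find_encoder(CODEC_ID_MPEG1VIDEO);
    AVCodecContext *c = avcodec_alloc_context();
    c->bit_rate  = 400000;
    c->width     = width;
    c->height    = height;
    c->time_base = (AVRational){1, 25};          /* 25 fps */
    c->gop_size  = 10;
    c->pix_fmt   = PIX_FMT_YUV420P;
    avcodec_open(c, codec);

    AVFrame *picture = avcodec_alloc_frame();
    /* allocate picture->data[]/linesize[] for YUV420P and
     * copy each of your images into it before encoding */

    FILE *f = fopen(outfile, "wb");
    uint8_t outbuf[200000];

    for (int i = 0; i < nframes; i++) {
        /* fill "picture" with image number i here */
        int out_size = avcodec_encode_video(c, outbuf, sizeof(outbuf), picture);
        fwrite(outbuf, 1, out_size, f);
    }

    fclose(f);
    avcodec_close(c);
    av_free(c);
    av_free(picture);
}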

Best way to create FLV stream from screenshots

I want to create an FLV stream generated from images taken from my DirectX application, to end up on a webpage.
My current plan is (and has been) to send screenshots as JPGs from the DX app to a client running on Linux. This client converts the JPGs into an MJPEG stream, and FFmpeg converts the MJPEG stream to FLV, ending up in Flash Player in the browser.
Something like:
run the DX app on a Windows machine; it listens for a connection to send the screenshot JPGs to
on linux machine; ./jpg_to_mjpeg_client | ffmpeg -f mjpeg -i - output.flv
I thought the plan was good, but I'm stuck now. FFmpeg doesn't seem to handle the MJPEG stream coming from the client correctly. I used some code I found on the net for creating the MJPEG stream from the JPGs, and I understand that there is no real specification for the MJPEG format, so maybe they don't use the same MJPEG format or something.
Right now I'm sending [size of JPG buffer], [JPG buffer] for every frame from the DX app. I guess I could encode some stream there too somehow, but on the other hand I don't want to waste too much CPU on the rendering machine either.
How would you do it? Any tips are highly appreciated: libraries/APIs to use, other solutions... I don't have much experience with video encoding at all, but I know my way around "general programming" pretty well.
C or C++ is preferred, but Java or Python might be OK too. I want it pretty fast though -
it has to be created in real time; one frame from the DX app should end up in the browser as soon as possible :-)
Oh, and in the future, the plan is for it to be interactive, so that I could communicate with/control the DX app from the web app in the browser. It might be good to add that information too. Sort of like a web-based VCR where the movie is rendered in real time from the DX app.
Thanks,
Use GStreamer on Linux. You can patch together almost any combination of inputs and outputs using whatever codecs you like. It is a bit of a hassle to learn.
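For example, a GStreamer 0.10-era pipeline along these lines can turn a test video source into FLV (treat it as a sketch rather than a tested command: element names such as ffenc_flv and flvmux come from the gst-ffmpeg / gst-plugins sets of that time and may differ on your install; you would replace videotestsrc with whatever source carries your screenshots):

gst-launch-0.10 videotestsrc ! ffmpegcolorspace ! ffenc_flv ! flvmux ! filesink location=output.flv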