I have a task where the audio is longer than the video, and the video should start later.
This is almost exactly what I want, but my case is a bit different: ffmpeg : mix audio and video of different length
To make it clearer, I have drawn a picture of what I mean.
Click to see the image.
By doing this I am hoping to keep the audio and video in sync. Which ffmpeg command can I use to do this?
Thank you.
I had to use an offset.
I solved it this way: I compute the difference between the video and audio durations and automatically add the corresponding offset, as sketched below. I do this from a programming language.
ffmpeg -i finalsound.flv -itsoffset 00:00:44 -i finalvideo.flv -c copy final.flv
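For reference, here is a minimal sketch of the idea, assuming the two durations are already known (in practice they could be read with ffprobe); the durations below are just placeholders:

#include <cstdio>
#include <cstdlib>

int main() {
    // Assumed durations in seconds (placeholders; in practice read them
    // with ffprobe or from your own metadata).
    double audioDuration = 104.0;
    double videoDuration = 60.0;

    // The video should start later by the difference between the two.
    int offset = static_cast<int>(audioDuration - videoDuration);
    if (offset < 0) offset = 0;

    // Format the offset as HH:MM:SS and pass it to -itsoffset.
    char cmd[512];
    std::snprintf(cmd, sizeof(cmd),
                  "ffmpeg -i finalsound.flv -itsoffset %02d:%02d:%02d "
                  "-i finalvideo.flv -c copy final.flv",
                  offset / 3600, (offset % 3600) / 60, offset % 60);
    return std::system(cmd);  // run the assembled ffmpeg command
}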
Thank you.
I'm working on a Matlab application that uses a VLC class to control a VLC instance. One of the features is to set the VLC player to fullscreen. This feature works perfectly fine.
The VLC player is downloaded from Matlab’s File Exchange: https://se.mathworks.com/matlabcentral/fileexchange/56215-vlc (Thanks a lot Léa Strobino)
However, one particular clip insists on resizing the player to a smaller size.
I have done some research and it turns out that this is a common problem in some VLC versions.
The usual workarounds are to uncheck the "adapt interface to video size" option (or whatever it is called exactly) and to check the "Fullscreen" box.
This ought to make the player open in fullscreen and not resize the window to the video size, yet this video still resizes the player to a smaller window.
All the clips have the same specs: the same file extension (.vob) and the same format, and they were made the same way (I did some trimming and similar edits using ffmpeg, but in the same way every time).
I have noticed one difference: this particular video has a lower data rate and bitrate (~1000-1500 kbps), whereas the others are higher (around 4000 kbps). Also, when showing the properties of the clip, the frame height and width are blank, as opposed to the others, which have specific values.
This should, however, not affect the fullscreen command issued from Matlab after loading the video into the playlist. The command has no effect on this video, but it works on all the others.
It is possible to set the player to fullscreen manually by clicking the window, so it is not caused by some restriction in the video that prevents fullscreen.
Why does the video refuse to go in to fullscreen?
Hope somebody is able to help.
Okay, so I seem to have solved the problem now, without being completely sure why: the problem was the lowered data rate.
I tried to add -crf 18 when converting my .mp4 to a .vob file:
ffmpeg -i input.mp4 -vcodec copy -acodec ac3 -crf 18 output.vob
The -crf option stands for Constant Rate Factor, a constant-quality encoding mode. The values go from 0 to 51; lower values mean higher quality (and generally a higher data rate), and 18 is commonly considered the lowest 'sane' value. A good explanation can be found here: https://superuser.com/questions/677576/what-is-crf-used-for-in-ffmpeg
With this higher data rate the video opens up in fullscreen every time :)
I am working on a project in which I am receiving raw frames from some input video devices. I am trying to write those frames to a video file using the FFmpeg library.
I have no control over the frame rate I am getting from my input sources, and it also varies at run time.
Now my problem is how to sync the recorded video with the incoming video. Depending on the frame rate I set in FFmpeg and the actual frame rate I am receiving, playback of the recorded video is either faster or slower than the input video.
I tried to add timestamps (as numOfFrames) to the encoded video as described in the following question, but that didn't help:
ffmpeg speed encoding problem
Please tell me a way to synchronize the two. This is my first time working with FFmpeg or any multimedia library, so any examples would be highly appreciated.
I am using the DirectShow ISampleGrabber interface to capture the frames.
Thank You
So finally I figured out how to do this. Here is how.
First, I was taking frames from the PREVIEW pin of the source filter, which does not timestamp the frames, so one should take frames from the CAPTURE pin of the source filter instead. Then, in the SampleCB callback function, the timestamp can be obtained with IMediaSample::GetTime(); however, this function returns time in units of 100 ns, whereas FFmpeg requires it in units of 1/time_base, where time_base is the desired frame rate.
So the DirectShow timestamp needs to be converted to FFmpeg units first; then we can assign it to the AVFrame::pts field. One more thing to consider is that the first frame of the video should have a timestamp of 0 in FFmpeg, so that needs to be taken care of while converting from the DirectShow timestamp to the FFmpeg one.
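To illustrate, here is a rough sketch of that conversion, assuming the libav* headers are available and the frame/stream setup lives elsewhere (the function and variable names are mine, not part of either API):

#include <cstdint>
extern "C" {
#include <libavutil/rational.h>
#include <libavutil/mathematics.h>
}

// DirectShow timestamps (REFERENCE_TIME) are in 100 ns units, i.e. a time
// base of 1/10,000,000. The first frame must map to pts 0, so remember the
// first timestamp and subtract it from all later ones.
static int64_t g_firstTimestamp = -1;

int64_t DirectShowTimeToPts(int64_t refTime, AVRational encoderTimeBase) {
    if (g_firstTimestamp < 0)
        g_firstTimestamp = refTime;
    const int64_t relative = refTime - g_firstTimestamp;

    const AVRational dshowBase = {1, 10000000};  // 100 ns units
    return av_rescale_q(relative, dshowBase, encoderTimeBase);
}

// Usage inside ISampleGrabberCB::SampleCB (sketch):
//   REFERENCE_TIME start = 0, stop = 0;
//   pSample->GetTime(&start, &stop);
//   frame->pts = DirectShowTimeToPts(start, codecContext->time_base);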
Thank You
I'd like to change the pitch of an audio file by changing the sample rate programmatically. I am recording the file using AVAudioRecorder. I have noticed a settings property on AVAudioPlayer; however, it is read-only. Can anyone lend a helping hand? :)
You could manipulate the data the recording process returns; this is generally the way to go for DSP.
A simple change in a sound's speed (and therefore its pitch) can be done with resampling.
Take a look here
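For illustration only, here is a minimal, generic resampling sketch (plain linear interpolation over a float buffer, not tied to AVAudioRecorder or any iOS API; the function name is made up):

#include <vector>

// Resample 'input' by 'ratio' using linear interpolation.
// ratio > 1.0 produces fewer samples; played back at the original sample
// rate, the sound is faster and higher pitched. ratio < 1.0 does the opposite.
std::vector<float> Resample(const std::vector<float>& input, double ratio) {
    std::vector<float> output;
    if (input.size() < 2 || ratio <= 0.0)
        return output;

    double pos = 0.0;
    while (pos < input.size() - 1) {
        const size_t i = static_cast<size_t>(pos);
        const double frac = pos - i;
        // Linear interpolation between neighbouring samples.
        output.push_back(static_cast<float>(
            input[i] * (1.0 - frac) + input[i + 1] * frac));
        pos += ratio;
    }
    return output;
}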
I built an application that plays both uploaded original mp3 files and copies that have been converted with FFmpeg. I am finding that in some cases the FFmpeg files have a horrible popping/clicking/screeching sound for a split second at startup (hear it below). But when I analyze the file in an audio editor there is nothing there, so it seems to be either the browser or soundManager reacting badly to something in that file. I am wondering if there is any way to fix this, either by adjusting the FFmpeg settings, the soundManager settings, or..... Any suggestions?
I've uploaded the offending sound in the link below (before the music starts playing).
Hear sound
Screeching/clicking at the beginning often points to a badly formed mp3 file. (There are a lot of them in the wild.)
Try checking both the original and converted files with mp3check -e -B
If you get errors, adding mp3check --cut-junk-start to your pipeline should work.
I'm writing a game for iPhone, and without background music it runs smoothly at 30 fps. But if I add music (using Audio Queues or AVAudioPlayer; both give a similar effect), the framerate periodically drops to 10 (about once per second) and then returns to 30. The music is MP3 at 128 kbps, 44 kHz. It degrades performance not constantly but at certain moments in time, which causes very jerky gameplay. Has anyone run into this problem? Is there any way to make the CPU load for MP3 decoding/playback more uniform? I'd rather have a constant 29 fps than 30 fps most of the time with a drop to 10 once per second.
Maybe you could increase the priority of your rendering thread? Or move it to a different run loop (if that is how you're doing animation).
A couple of thoughts:
1) It might be worth trying to convert your files to another format and see if you still have this issue. I have had great success with CAF files; just run afconvert over them.
afconvert -f caff -d ima4 <your mp3 file>
2) Also, is there any connection between the slowdown and track changes? I have often seen similar things when changing tracks (or restarting tracks). I think it has to do with the load it takes to read the file from 'disk'.
This is most probably caused by using the AmbientSound audio session category; see this question. You can easily solve the problem by switching to the SoloAmbientSound category.
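For reference, a sketch of switching the category with the old C AudioSession API (the function name is mine; error handling is omitted, and on current iOS you would do the equivalent through AVAudioSession instead):

#include <AudioToolbox/AudioToolbox.h>

// Switch from the AmbientSound category to SoloAmbientSound so MP3
// playback can typically use the hardware decoder rather than the CPU.
void UseSoloAmbientCategory() {
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    UInt32 category = kAudioSessionCategory_SoloAmbientSound;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                            sizeof(category), &category);
    AudioSessionSetActive(true);
}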
You need to profile your app.
I'm getting 60fps regardless of bitrate.
It would be best if you showed us your code. Have you looked into the audio buffer options? They might not be optimal for what you need. Are you comfortable with Audio Units?