AVAssetExportSession fails with "Cannot Decode" - Swift

Hi, I am facing an issue with AVAssetExportSession. I am trying to combine many videos one after another, say 20-30, adding each asset as a track of an AVMutableComposition with insertTimeRange:
|video1|video2| ... |videoi| ... |videoN|
If I exceed 15-16 videos and export, I get the following error:
failed: Error Domain=AVFoundationErrorDomain Code=-11839 "Cannot Decode" UserInfo={NSLocalizedDescription=Cannot Decode, NSUnderlyingError=0x1c8044620 {Error Domain=NSOSStatusErrorDomain Code=-12913 "(null)"}, NSLocalizedRecoverySuggestion=Stop any other actions that decode media and try again., NSLocalizedFailureReason=The decoder required for this media is busy.}
I am sure this is not a RAM issue, since I am testing on an iPhone 8 Plus and Xcode shows low memory usage.
If I reduce the number of videos to 5-8, everything works great...
Is there a limit on adding video tracks?
Can anyone help me to achieve this goal?

I finally found the cause, and it was my fault!
The problem was that I was creating a new AVMutableCompositionTrack for each video in the loop.
Every video ended up on its own track, which probably exhausted the available decoder resources.
Now I create only two tracks: one for video and one for audio.
All my videos are stitched into them with time ranges.
Thank you guys!!!!
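For anyone hitting the same error, here is a minimal sketch of the two-track approach described above. It assumes an array of AVAssets where each clip has a video track and an audio track; the function name and error handling are placeholders, not the original code.

import AVFoundation

// Stitch a list of assets into one composition using a single video
// track and a single audio track, instead of one track per clip.
func makeComposition(from assets: [AVAsset]) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard
        let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid),
        let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                     preferredTrackID: kCMPersistentTrackID_Invalid)
    else { fatalError("Could not create composition tracks") }

    var cursor = CMTime.zero
    for asset in assets {
        let range = CMTimeRange(start: .zero, duration: asset.duration)
        // Append each clip's media at the current cursor position.
        if let sourceVideo = asset.tracks(withMediaType: .video).first {
            try videoTrack.insertTimeRange(range, of: sourceVideo, at: cursor)
        }
        if let sourceAudio = asset.tracks(withMediaType: .audio).first {
            try audioTrack.insertTimeRange(range, of: sourceAudio, at: cursor)
        }
        cursor = CMTimeAdd(cursor, asset.duration)
    }
    return composition
}

The composition is then exported with AVAssetExportSession as before; the difference is that the exporter only ever has to decode one video track and one audio track.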

Related

scipy.io.wavfile.read() fails to read ffmpeg-python's output

I followed the top answer in this StackOverflow post to use ffmpeg-python to extract a .wav file from a YouTube URL (with the pcm_s16le codec), and the result plays successfully in my local audio player (Mac's Music).
However, when I try to read it using scipy.io's wavfile,
samplerate, data = wavfile.read(wav_fname)
the following warning is raised:
"WavFileWarning: Reached EOF prematurely; finished at 1192015 bytes, expected 4294967303 bytes from header."
Can anyone suggest what's going on?
In short: I extracted a .wav file that my local music player reads fine, but scipy.io's wavfile fails to read it, and I am not sure why.

Flutter's JustAudio plugin: createTrack returned error -12

Here's an error that occurs intermittently when using Flutter's just_audio plugin with .m4a audio files:
createTrack returned error -12
E/AudioTrack(22346): createTrack_l(8194608): AudioFlinger could not create track, status: -12 output 0
E/AudioTrack-JNI(22346): Error -12 initializing AudioTrack
D/AudioTrack(22346): gather(): no metrics gathered, track status=-12
E/android.media.AudioTrack(22346): Error code -20 when initializing AudioTrack.
E/IAudioFlinger(22346): createTrack returned error -12
The file seems to be there, and clicking play moves the progress bar forward, but I can't hear anything. If I close and restart Android Studio, the problem goes away, but it makes me worried about going live. How do I troubleshoot this?
On Android, audio players are resource-intensive, and only a limited number of them can be allocated before you get this error. You need to ensure that you dispose of a player once it is no longer being used, before you create new ones; otherwise you will run out of resources.

Unity issue on WebGL: FSBTool ERROR: Failed with error code 80004001

This only happens with WebGL.
Errors during import of AudioClip Assets/Audio/background.wav:
FSBTool ERROR: Failed with error code 80004001
FSBTool ERROR: Failed encoding audio clip '/Assets/Audio/background.wav' to AAC. Possibly the file is too short. Try to append silence such that the length becomes greater than 256 samples.
I can't add the audio clip on WebGL, but it works perfectly on other platforms.
Please help.
The error tells you that the audio clip is too short. Try appending silence (as the message suggests) so that the clip is longer than 256 samples.

What error does "ExtAudioFileWriteAsync -50" indicate?

I'm attempting to write an AAC file from the output stream of an AUGraph. On playback the file only produces a buzzing noise, and I get the error ExtAudioFileWriteAsync -50.
I'd like to know what it means so that I can search for and destroy the problem.
Thanks to any Core Audio ninjas that can hook a brother up.
In case anyone else has this problem: the -50 error is kAudio_ParamError, defined in CoreAudioTypes.h.
Therefore, one of the parameters being passed to ExtAudioFileWriteAsync must be faulty.
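To make that concrete, here is a hedged Swift sketch of the kind of parameter hygiene that usually turns up the culprit. The helper names and format values are illustrative assumptions, not taken from the question: check every OSStatus, set the client data format explicitly so it matches the buffers your render callback hands to ExtAudioFileWriteAsync, and make the zero-frame priming call before the render thread starts writing.

import AudioToolbox

// Print any non-success OSStatus; -50 is kAudio_ParamError.
func check(_ status: OSStatus, _ operation: String) {
    guard status != noErr else { return }
    print("\(operation) failed with OSStatus \(status)")
}

// Illustrative setup: interleaved 16-bit stereo PCM as the client format.
// If this does not match the AudioBufferList and frame count you later
// pass to ExtAudioFileWriteAsync, writes can fail or produce garbage audio.
func configureClientFormat(for file: ExtAudioFileRef, sampleRate: Float64) {
    var clientFormat = AudioStreamBasicDescription(
        mSampleRate: sampleRate,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
        mBytesPerPacket: 4,
        mFramesPerPacket: 1,
        mBytesPerFrame: 4,
        mChannelsPerFrame: 2,
        mBitsPerChannel: 16,
        mReserved: 0)

    check(ExtAudioFileSetProperty(file,
                                  kExtAudioFileProperty_ClientDataFormat,
                                  UInt32(MemoryLayout.size(ofValue: clientFormat)),
                                  &clientFormat),
          "ExtAudioFileSetProperty(ClientDataFormat)")

    // Priming call recommended by the ExtAudioFile header: zero frames and
    // a nil buffer, made off the render thread, before the first real write.
    check(ExtAudioFileWriteAsync(file, 0, nil), "ExtAudioFileWriteAsync priming")
}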

x264 IDR access unit with a SPS and a PPS

I am trying to encode video in H.264 so that, when split with Apple's HTTP Live Streaming media file segmenter, it will pass the media file validator. I am getting two warnings on the split MPEG-TS file:
WARNING: Media segment contains a video track but does not contain any IDR access unit with a SPS and a PPS.
WARNING: 7 samples (17.073 %) do not have timestamps in track 257 (avc1).
After hours of research, I think the "IDR" warning relates to not having keyframes in the right places in the segmented MPEG-TS file, so in my ffmpeg command I set -keyint_min 1 to ensure a keyframe at every frame, but this didn't work.
Although it would be great to get a full answer, if anyone can shed any light on what an "IDR access unit with a SPS and a PPS" is, or what the timestamps warning means, I would be very grateful. Thanks.
The fix can be found in this thread: https://devforums.apple.com/thread/45830?tstart=15