I want to play audio from this URL:
https://wortcast01.wortfm.org/appfiles/wort_210715_080006buzzthu.m3u
Its body contains these tracks:
https://wortcast01.wortfm.org/pitch/preroll-buzzthu.mp3
https://wortcast01.wortfm.org/mp3/wort_210715_080006buzzthu.mp3
When I set https://wortcast01.wortfm.org/appfiles/wort_210715_080006buzzthu.m3u as the AVPlayer's item, the duration is NaN, even though each track in the list has a duration.
Does anyone have an idea how to extract the duration via AVPlayer?
This returns NaN:
var itemDuration: Double? {
    return currentItem?.duration.seconds
}
AVFoundation (AVPlayer, AVAsset, ...) automatically treats any .m3u playlist as a live broadcast (my assumption: it looks like Apple applies its M3U8 handling to M3U files).
Solution:
Add extra functionality that downloads the .m3u (or .pls) content, builds an internal playlist from its entries, and plays those items in AVPlayer.
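For example, a minimal sketch of that workaround (playM3U is just an illustrative name; it assumes the playlist body is a plain list of track URLs, with # lines treated as directives/comments):

import AVFoundation

// Download the .m3u, keep the entries that look like track URLs,
// and queue them in an AVQueuePlayer so each item reports its own duration.
func playM3U(from playlistURL: URL, with player: AVQueuePlayer) {
    URLSession.shared.dataTask(with: playlistURL) { data, _, _ in
        guard let data = data, let text = String(data: data, encoding: .utf8) else { return }

        let trackURLs = text
            .components(separatedBy: .newlines)
            .map { $0.trimmingCharacters(in: .whitespaces) }
            .filter { !$0.isEmpty && !$0.hasPrefix("#") }   // skip M3U directives/comments
            .compactMap { URL(string: $0) }

        DispatchQueue.main.async {
            player.removeAllItems()
            trackURLs.forEach { player.insert(AVPlayerItem(url: $0), after: nil) }
            player.play()
        }
    }.resume()
}

Because each queued AVPlayerItem then points at a single .mp3, currentItem?.duration reports a real value instead of NaN.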
Related
I've been playing around with Flutter, trying to record from the front-facing camera (using the camera plugin [https://pub.dev/packages/camera]) while playing a video to the user (using the video_player plugin [https://pub.dev/packages/video_player]).
Next I use ffmpeg to horizontally stack the videos together. This all works fine, but when I play back the final output the audio is slightly out of sync. I'm calling Future.wait([cameraController.startVideoRecording(filePath), videoController.play()]); but there is a slight delay before these tasks actually start. I don't even need them to fire at exactly the same time (which I'm realising is probably impossible); instead, if I knew exactly when each task began, I could use the time difference to sync the audio with ffmpeg or similar.
I've tried adding a listener on the videoController to see when isPlaying first returns true, and also watching the output directory for when the recorded video appears on the filesystem:
var listener;
listener = () {
  if (videoController.value.isPlaying) {
    isPlaying = DateTime.now().microsecondsSinceEpoch;
    log('isPlaying ' + isPlaying.toString());
    videoController.removeListener(listener);
  }
};
videoController.addListener(listener);
var watcher = DirectoryWatcher('${extDir.path}/');
watcher.events.listen((event) {
  if (event.type == ChangeType.ADD) {
    fileAdded = DateTime.now().microsecondsSinceEpoch;
    log('added ' + fileAdded.toString());
  }
});
Then likewise for checking if the camera is recording:
var listener;
listener = () {
  if (cameraController.value.isRecordingVideo) {
    log('isRecordingVideo ' + DateTime.now().microsecondsSinceEpoch.toString());
    //cameraController.removeListener(listener);
  }
};
cameraController.addListener(listener);
This results in (for example) the following order and microseconds for each event:
is playing: 1606478994797247
is recording: 1606478995492889 (695642 microseconds later)
added: 1606478995839676 (346787 microseconds later)
However, when I play back the video, the syncing is off by approximately 0.152 seconds, which doesn't marry up with the time differences reported above.
Does anyone have any idea how I could accomplish near perfect syncing when combining 2 videos? Thanks.
I am using an AVPlayer to play video. I would like to loop sections of the video based on the user's input: while the video is playing, the user can press a button to mark the start of a loop and press it again a few seconds later to mark the end. Playback should then jump back to the start time and keep looping each time the current time reaches the specified end time.
I can get these start/end loop times just by reading the player's currentTime:
var startLoop : CMTime = player.currentTime()
// seconds pass by ....
var endLoop : CMTime = player.currentTime()
I know there is a way to cleanly loop the video back to the beginning once it has finished playing like so:
NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime, object: self.player.currentItem, queue: .main) { [weak self] _ in
    self?.player?.seek(to: CMTime.zero)
    self?.player?.rate = self?.rate ?? 1.0
}
I was wondering if there is a way to do this with my custom startLoop and endLoop times?
There are a few ways to do looping in AVFoundation. The simple way is the one you described: listen for the notification and then call seek(to: startLoop). You can also use addBoundaryTimeObserver to be notified at a specific time.
https://developer.apple.com/documentation/avfoundation/avplayer/1388027-addboundarytimeobserver
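For instance, a minimal sketch of the boundary-observer approach (assuming player, startLoop, and endLoop from your question are non-optional properties of the same class):

var boundaryObserver: Any?

func beginLooping() {
    // Fire a callback whenever playback reaches endLoop, then jump back to startLoop.
    boundaryObserver = player.addBoundaryTimeObserver(forTimes: [NSValue(time: endLoop)], queue: .main) { [weak self] in
        guard let self = self else { return }
        self.player.seek(to: self.startLoop, toleranceBefore: .zero, toleranceAfter: .zero)
    }
}

func endLooping() {
    // Stop looping by removing the observer.
    if let observer = boundaryObserver {
        player.removeTimeObserver(observer)
        boundaryObserver = nil
    }
}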
However, a more advanced way to loop is to add a Key-Value Observer to an AVQueuePlayer and insert two copies of an AVPlayerItem covering the specific range of the asset you want to loop. Then you can use looping techniques such as the treadmill technique or AVPlayerLooper. The key to creating custom time-ranged assets is to use an AVMutableComposition and insertTimeRange.
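As a rough sketch of the AVPlayerLooper route (assuming asset, startLoop, and endLoop are already defined; error handling and player-layer setup are omitted):

let queuePlayer = AVQueuePlayer()
let templateItem = AVPlayerItem(asset: asset)
let loopRange = CMTimeRange(start: startLoop, end: endLoop)

// AVPlayerLooper keeps refilling the queue with copies of the item,
// restricted to the given time range. Keep a strong reference to the looper.
let looper = AVPlayerLooper(player: queuePlayer, templateItem: templateItem, timeRange: loopRange)
queuePlayer.play()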
Also, if you really want to be experimental, there is a third option: use an AVMutableComposition as your original source asset and mutate its underlying tracks on the fly while it is playing. I've tried something similar and it can work as long as you are not manipulating time ranges that are currently being played. However, there are no docs for this approach, and Apple may change things in future versions that break it.
References:
https://developer.apple.com/videos/play/wwdc2016/503/
https://developer.apple.com/library/archive/samplecode/avloopplayer/Introduction/Intro.html
https://developer.apple.com/library/archive/samplecode/AVCustomEdit/Introduction/Intro.html
Recommended way of updating timeRange property on AVPlayerLooper
@spitchay Reach out to me if you have other AVFoundation questions.
I am developing a small audio sequencer application using AudioKit. I only need to play back 4 channels of audio, but I need them to play back perfectly synchronized, down to the sample level. When I run a test using just two audio files, I can hear that they are not synchronized. The difference is only a few samples, but even a one-sample discrepancy would be a problem. I am currently using multiple AKClipPlayer objects routed to an AKMixer object, and I call them with a basic for loop like this:
private var clipPlayers: [AKClipPlayer] = []

func play() {
    for player in clipPlayers {
        player.play()
    }
}
Is sample accurate playback timing of multiple audio files possible using AudioKit?
Yes, you need to schedule playback to start in the future with play(at:).
// This can take longer than expected, so do this before choosing a future time
clipPlayers.forEach { $0.prepare(withFrameCount: 10_000) }
let nearFuture = AVAudioTime.now() + 0.2
clipPlayers.forEach { $0.play(at: nearFuture) }
I'm using the MPMoviePlayer to play a movie. I want to be able to play the movie and allow the user to skip to a certain time in the film at the press of a button.
I'm setting the currentPlaybackTime property of the player, but it doesn't seem to work. Instead it simply starts the movie from the beginning no matter what value I use.
Also, when I log the currentPlaybackTime property from a button click, it always returns a large number, and sometimes it even returns a negative value (e.g. -227361412). Is this expected?
Sample code below:
- (IBAction)playShake
{
    NSLog(@"playback time = %d", playerIdle.currentPlaybackTime);
    [self.playerIdle setCurrentPlaybackTime:2.0];
    return;
}
I have successfully used this method of skipping to a point in a movie in the past. I suspect that your issue is actually with the video itself. When you set currentPlaybackTime, MPMoviePlayer skips to the nearest keyframe in the video. If the video has few keyframes, or you're only skipping forward a few seconds, this can cause the video to start over from the beginning when you change currentPlaybackTime.
-currentPlaybackTime returns an NSTimeInterval (a double), which you are printing as a signed int; this will produce unexpected values. Either cast it to an int ((int)playerIdle.currentPlaybackTime) or print it as a double with %1.3f.
I'm wondering if there is a way to "listen" to the microphone and display its input levels without recording.
Apple's SpeakHere sample does recording and playback; I'm wondering if there could be a lighter version that just "listens" without actually recording and saving a file.
I use AudioQueues for this purpose. In your callback, get the input level like so:
// Level metering must be enabled on the queue first, e.g.:
//   UInt32 on = 1;
//   AudioQueueSetProperty(aqInput, kAudioQueueProperty_EnableLevelMetering, &on, sizeof(on));
AudioQueueLevelMeterState meter[NUM_INPUT_CHANNELS];
UInt32 dataSize = sizeof(meter);
AudioQueueGetProperty(aqInput, kAudioQueueProperty_CurrentLevelMeterDB, meter, &dataSize);
// The input level for channel i is in meter[i].mAveragePower
And simply don't write the audio into a file.