I need to create something like an infinite loop in my AVQueuePlayer. Specifically, I want to replay the whole NSArray of AVPlayerItems once the last item finishes playing.
I must admit that I actually do not have any idea how to achieve this and hope you can give me some clues.
The best way to loop a sequence of videos in AVQueuePlayer is to observe each AVPlayerItem in the queue:
queuePlayer.actionAtItemEnd = AVPlayerActionAtItemEndNone;
for(AVPlayerItem *item in items) {
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(nextVideo:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:item ];
}
In each nextVideo: call, insert the current item back into the queue so it is queued for playback again. Make sure to seek each item back to zero; after advanceToNextItem, the AVQueuePlayer removes the current item from the queue.
- (void)nextVideo:(NSNotification *)notif {
    // The finished item is the notification's object, not an entry in userInfo.
    AVPlayerItem *currItem = (AVPlayerItem *)notif.object;
    [currItem seekToTime:kCMTimeZero];
    [queuePlayer advanceToNextItem];
    [queuePlayer insertItem:currItem afterItem:nil];
}
This is it pretty much from scratch. The components are:
Create a queue that is an NSArray of AVPlayerItems.
As each item is added to the queue, set up a NSNotificationCenter observer to wake up when the video reaches the end.
In the observer's selector, tell the AVPlayerItem that you want to loop to go back to the beginning, then tell the player to play.
(NOTE: The AVPlayerDemoPlaybackView comes from the Apple "AVPlayerDemo" sample. Simply a subclass of UIView with a setter)
BOOL videoShouldLoop = YES;
NSFileManager *fileManager = [NSFileManager defaultManager];
NSMutableArray *videoQueue = [[NSMutableArray alloc] init];
AVQueuePlayer *mPlayer;
AVPlayerDemoPlaybackView *mPlaybackView;
// You'll need to get an array of the files you want to queue as NSArray *fileList:
for (NSString *videoPath in fileList) {
// Add all files to the queue as AVPlayerItems
if ([fileManager fileExistsAtPath: videoPath]) {
NSURL *videoURL = [NSURL fileURLWithPath: videoPath];
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithURL: videoURL];
// Setup the observer
[[NSNotificationCenter defaultCenter] addObserver: self
selector: @selector(playerItemDidReachEnd:)
name: AVPlayerItemDidPlayToEndTimeNotification
object: playerItem];
// Add the playerItem to the queue
[videoQueue addObject: playerItem];
}
}
// Add the queue array to the AVQueuePlayer
mPlayer = [AVQueuePlayer queuePlayerWithItems: videoQueue];
// Add the player to the view
[mPlaybackView setPlayer: mPlayer];
// If you should only have one video, this allows it to stop at the end instead of blanking the display
if ([[mPlayer items] count] == 1) {
mPlayer.actionAtItemEnd = AVPlayerActionAtItemEndNone;
}
// Start playing
[mPlayer play];
- (void) playerItemDidReachEnd: (NSNotification *)notification
{
// Loop the video
if (videoShouldLoop) {
// Get the current item
AVPlayerItem *playerItem = [mPlayer currentItem];
// Set it back to the beginning
[playerItem seekToTime: kCMTimeZero];
// Tell the player to do nothing when it reaches the end of the video
// -- It will come back to this method when it's done
mPlayer.actionAtItemEnd = AVPlayerActionAtItemEndNone;
// Play it again, Sam
[mPlayer play];
} else {
mPlayer.actionAtItemEnd = AVPlayerActionAtItemEndAdvance;
}
}
That's it! Let me know if something needs further explanation.
I figured out a solution to loop over all the videos in my video queue, not just one. First I initialized my AVQueuePlayer:
- (void)viewDidLoad
{
NSMutableArray *vidItems = [[NSMutableArray alloc] init];
for (int i = 0; i < 5; i++)
{
// create file name and make path
NSString *fileName = [NSString stringWithFormat:@"intro%i", i];
NSString *path = [[NSBundle mainBundle] pathForResource:fileName ofType:@"mov"];
NSURL *movieUrl = [NSURL fileURLWithPath:path];
// load url as player item
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:movieUrl];
// observe when this item ends
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:item];
// add to array
[vidItems addObject:item];
}
// initialize avqueueplayer
_moviePlayer = [AVQueuePlayer queuePlayerWithItems:vidItems];
_moviePlayer.actionAtItemEnd = AVPlayerActionAtItemEndNone;
// create layer for viewing
AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:_moviePlayer];
layer.frame = self.view.bounds;
layer.videoGravity = AVLayerVideoGravityResizeAspectFill;
// add layer to uiview container
[_movieViewContainer.layer addSublayer:layer];
}
When the notification is posted:
- (void)playerItemDidReachEnd:(NSNotification *)notification {
AVPlayerItem *p = [notification object];
// keep playing the queue
[_moviePlayer advanceToNextItem];
// if this is the last item in the queue, add the videos back in
if (_moviePlayer.items.count == 1)
{
// it'd be more efficient to make this a method being we're using it a second time
for (int i = 0; i < 5; i++)
{
NSString *fileName = [NSString stringWithFormat:@"intro%i", i];
NSString *path = [[NSBundle mainBundle] pathForResource:fileName ofType:@"mov"];
NSURL *movieUrl = [NSURL fileURLWithPath:path];
AVPlayerItem *item = [AVPlayerItem playerItemWithURL:movieUrl];
[[NSNotificationCenter defaultCenter] addObserver:self
selector:@selector(playerItemDidReachEnd:)
name:AVPlayerItemDidPlayToEndTimeNotification
object:item];
// the difference from last time, we're adding the new item after the last item in our player to maintain the order
[_moviePlayer insertItem:item afterItem:[[_moviePlayer items] lastObject]];
}
}
}
In Swift 4, you can use something like the following:
NotificationCenter.default.addObserver(forName: NSNotification.Name.AVPlayerItemDidPlayToEndTime, object: nil, queue: nil) {
notification in
print("A PlayerItem just finished playing !")
currentVideo += 1
if(currentVideo >= videoPaths.count) {
currentVideo = 0
}
let url = URL(fileURLWithPath:videoPaths[currentVideo])
let playerItem = AVPlayerItem.init(url: url)
player.insert(playerItem, after: nil)
}
From what I understand, the AVQueuePlayer removes each item once it has played, so you need to keep adding the video that just played back to the queue; otherwise it will actually run out of videos to play. You can't just seek to 0 and call play() again; that doesn't work.
Working on iOS 14 as of March 2021.
Just wanted to post my solution; as a new coder this took me a while to figure out.
My solution loops an AVQueuePlayer inside a UIView without using AVPlayerLooper. The idea came from raywenderlich.com. You can ignore the UIView wrapper, etc. The key is to use an NSKeyValueObservation rather than the notification center that everyone else suggests.
class QueuePlayerUIView: UIView {
private var playerLayer = AVPlayerLayer()
private var playerLooper: AVPlayerLooper? // declared but not used in this approach
@objc let queuePlayer = AVQueuePlayer()
var token: NSKeyValueObservation?
var clips = [URL]()
func addAllVideosToPlayer() {
for video in clips {
let asset = AVURLAsset(url: video)
let item = AVPlayerItem(asset: asset)
queuePlayer.insert(item, after: queuePlayer.items().last)
}
}
init(videosArr: [URL]) {
super.init(frame: .zero)
self.clips = videosArr
addAllVideosToPlayer()
playerLayer.player = queuePlayer
playerLayer.videoGravity = .resizeAspectFill
layer.addSublayer(playerLayer)
queuePlayer.play()
token = queuePlayer.observe(\.currentItem) { [weak self] player, _ in
if player.items().count == 1 {
self?.addAllVideosToPlayer()
}
}
}
override func layoutSubviews() {
super.layoutSubviews()
playerLayer.frame = bounds
}
required init?(coder: NSCoder) {
fatalError("init(coder:) has not been implemented")
}
}
I'm trying to implement an embedded Youtube video in my application.
The only way I've managed to do this is with frameworks like LBYouTubeView or HCYoutubeParser. From what I've read, they are against the YouTube TOS because they basically strip the direct video link out of the HTTP response, getting something like this.
This indeed can be played inside a MPMoviePlayerController without any problems (and without leaving the application).
From what I've searched, there are two applications that have managed to do this, Vodio and Frequency, so I'm wondering if there is a special SDK for developers or an affiliate program I might have missed.
From what I've also read, YouTube engineers respond to these questions if they are tagged with youtube-api, so I hope you can clear this up.
I've read about the UIWebView implementations, but that scenario actually opens an external video player which I have no control over.
I've looked around for a while and this is what I came up with:
It involves adding a youtube.html file to your project with the following code:
<html>
<head><style>body{margin:0px 0px 0px 0px;}</style></head>
<body>
<!-- 1. The <iframe> (and video player) will replace this <div> tag. -->
<div id="player"></div>
<script>
// 2. This code loads the IFrame Player API code asynchronously.
var tag = document.createElement('script');
tag.src = "http://www.youtube.com/player_api";
var firstScriptTag = document.getElementsByTagName('script')[0];
firstScriptTag.parentNode.insertBefore(tag, firstScriptTag);
// 3. This function creates an <iframe> (and YouTube player)
// after the API code downloads.
var player;
function onYouTubePlayerAPIReady()
{
player = new YT.Player('player',
{
width: '640',
height: '360',
videoId: '%@',
playerVars: {'autoplay' : 1, 'controls' : 0 , 'vq' : 'hd720', 'playsinline' : 1, 'showinfo' : 0, 'rel' : 0, 'enablejsapi' : 1, 'modestbranding' : 1},
events:
{
'onReady': onPlayerReady,
'onStateChange': onPlayerStateChange
}
});
}
// 4. The API will call this function when the video player is ready.
function onPlayerReady(event)
{
event.target.playVideo();
}
// 5. The API calls this function when the player's state changes.
// The function indicates that when playing a video (state=1),
// the player should play for six seconds and then stop.
var done = false;
function onPlayerStateChange(event)
{
if (event.data == YT.PlayerState.ENDED)
{
window.location = "callback:anything";
};
}
function stopVideo()
{
player.stopVideo();
window.location = "callback:anything";
}
function getTime()
{
return player.getCurrentTime();
}
function getDuration()
{
return player.getDuration();
}
function pause()
{
player.pauseVideo();
}
function play()
{
player.playVideo();
}
</script>
</body>
</html>
You also have to create a UIWebView subclass that acts as its own UIWebViewDelegate:
- (id)initWithFrame:(CGRect)frame
{
self = [super initWithFrame:frame];
if (self == nil) return nil;
self.mediaPlaybackRequiresUserAction = NO;
self.delegate = self;
self.allowsInlineMediaPlayback = TRUE;
self.userInteractionEnabled = FALSE;
return self;
}
- (void)loadVideo:(NSString *)videoId
{
NSString *filePath = [[NSBundle mainBundle] pathForResource:@"YouTube" ofType:@"html"];
// if (filePath == nil)
NSError *error;
NSString *string = [NSString stringWithContentsOfFile:filePath encoding:NSUTF8StringEncoding error:&error];
// TODO: error check
string = [NSString stringWithFormat:string, videoId];
NSData *htmlData = [string dataUsingEncoding:NSUTF8StringEncoding];
// if (htmlData == nil)
NSString *documentsDirectoryPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *targetPath = [documentsDirectoryPath stringByAppendingPathComponent:@"youtube2.html"];
[htmlData writeToFile:targetPath atomically:YES];
[self loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:targetPath]]];
File = 0;
}
- (BOOL)webView:(UIWebView *)webView shouldStartLoadWithRequest:(NSURLRequest *)request navigationType:(UIWebViewNavigationType)navigationType
{
if ([[[request URL] scheme] isEqualToString:@"callback"])
{
Playing = FALSE;
NSString *documentsDirectoryPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *targetPath = [documentsDirectoryPath stringByAppendingPathComponent:@"youtube2.html"];
NSError *error;
[[NSFileManager defaultManager] removeItemAtPath:targetPath error:&error];
}
return YES;
}
Basically it creates a UIWebView and loads the code in the youtube.html file. Because youtube.html is static and I need to load a specific video id, I create a copy on the fly in the Documents folder, youtube2.html, into which I insert the id string.
The point is that [self loadRequest:[NSURLRequest requestWithURL:[NSURL fileURLWithPath:targetPath]]]; works, but loading an NSURLRequest from a string doesn't.
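If you want to avoid writing the temporary file, UIWebView also has loadHTMLString:baseURL:, which might work here as well; a minimal, untested sketch using the same variable names as above:
// Load the formatted HTML string directly; the bundle URL acts as the base URL so
// relative resources still resolve. Whether the IFrame API behaves identically to the
// file-URL approach would need to be verified.
[self loadHTMLString:string baseURL:[[NSBundle mainBundle] bundleURL]];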
The Javascript functions you see in the html file are for controlling the video. If you need access to time or full duration you can get them like this:
float VideoDuration = [[YT_WebView stringByEvaluatingJavaScriptFromString:@"getDuration()"] floatValue];
float VideoTime = [[YT_WebView stringByEvaluatingJavaScriptFromString:@"getTime()"] floatValue];
I currently have an iPhone app that lets the users take video, upload it to the server, and allows others to view their video from the app. Never had an issue with the video's orientation until I went to make a web site to view the different videos (along with other content).
I consume the videos from the web service and load them with AJAX using video.js, but every single one of them is rotated left 90 degrees. From what I've read, it sounds like the orientation information can be read on iOS, but not on a website. Is there any way to save a new orientation for the video on the iPhone before sending it to the server?
//change Orientation Of video
//in videoorientation.h
#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
@interface videoorientationViewController : UIViewController
@property AVMutableComposition *mutableComposition;
@property AVMutableVideoComposition *mutableVideoComposition;
@property AVMutableAudioMix *mutableAudioMix;
@property AVAssetExportSession *exportSession;
- (void)performWithAsset : (NSURL *)moviename;
@end
In viewcontroller.m:
- (void)performWithAsset : (NSURL *)moviename
{
self.mutableComposition=nil;
self.mutableVideoComposition=nil;
self.mutableAudioMix=nil;
// NSString* filename = [NSString stringWithFormat:@"temp1.mov"];
//
// NSLog(@"file name== %@",filename);
//
// [[NSUserDefaults standardUserDefaults]setObject:filename forKey:@"currentName"];
// NSString* path = [NSTemporaryDirectory() stringByAppendingPathComponent:filename];
// NSLog(@"file number %i",_currentFile);
// NSURL* url = [NSURL fileURLWithPath:path];
// NSString *videoURL = [[NSBundle mainBundle] pathForResource:@"Movie" ofType:@"m4v"];
AVAsset *asset = [[AVURLAsset alloc] initWithURL:moviename options:nil];
AVMutableVideoCompositionInstruction *instruction = nil;
AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
CGAffineTransform t1;
CGAffineTransform t2;
AVAssetTrack *assetVideoTrack = nil;
AVAssetTrack *assetAudioTrack = nil;
// Check if the asset contains video and audio tracks
if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
}
if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
}
CMTime insertionPoint = kCMTimeZero;
NSError *error = nil;
// Step 1
// Create a composition with the given asset and insert audio and video tracks into it from the asset
if (!self.mutableComposition) {
// Check whether a composition has already been created, i.e, some other tool has already been applied
// Create a new composition
self.mutableComposition = [AVMutableComposition composition];
// Insert the video and audio tracks from AVAsset
if (assetVideoTrack != nil) {
AVMutableCompositionTrack *compositionVideoTrack = [self.mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
}
if (assetAudioTrack != nil) {
AVMutableCompositionTrack *compositionAudioTrack = [self.mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
}
}
// Step 2
// Translate the composition to compensate the movement caused by rotation (since rotation would cause it to move out of frame)
t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
float width=assetVideoTrack.naturalSize.width;
float height=assetVideoTrack.naturalSize.height;
float toDiagonal=sqrt(width*width+height*height);
float toDiagonalAngle = radiansToDegrees(acosf(width/toDiagonal));
float toDiagonalAngle2=90-radiansToDegrees(acosf(width/toDiagonal));
float toDiagonalAngleComple;
float toDiagonalAngleComple2;
float finalHeight = 0.0;
float finalWidth = 0.0;
float degrees=90;
if (degrees >= 0 && degrees <= 90) {
toDiagonalAngleComple=toDiagonalAngle+degrees;
toDiagonalAngleComple2=toDiagonalAngle2+degrees;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
t1 = CGAffineTransformMakeTranslation(height*sinf(degreesToRadians(degrees)), 0.0);
}
else if (degrees > 90 && degrees <= 180) {
float degrees2 = degrees-90;
toDiagonalAngleComple=toDiagonalAngle+degrees2;
toDiagonalAngleComple2=toDiagonalAngle2+degrees2;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
t1 = CGAffineTransformMakeTranslation(width*sinf(degreesToRadians(degrees2))+height*cosf(degreesToRadians(degrees2)), height*sinf(degreesToRadians(degrees2)));
}
else if (degrees >= -90 && degrees < 0) {
float degrees2 = degrees-90;
float degreesabs = ABS(degrees);
toDiagonalAngleComple=toDiagonalAngle+degrees2;
toDiagonalAngleComple2=toDiagonalAngle2+degrees2;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
t1 = CGAffineTransformMakeTranslation(0, width*sinf(degreesToRadians(degreesabs)));
}
else if (degrees >= -180 && degrees < -90) {
float degreesabs = ABS(degrees);
float degreesplus = degreesabs-90;
toDiagonalAngleComple=toDiagonalAngle+degrees;
toDiagonalAngleComple2=toDiagonalAngle2+degrees;
finalHeight=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple)));
finalWidth=ABS(toDiagonal*sinf(degreesToRadians(toDiagonalAngleComple2)));
t1 = CGAffineTransformMakeTranslation(width*sinf(degreesToRadians(degreesplus)), height*sinf(degreesToRadians(degreesplus))+width*cosf(degreesToRadians(degreesplus)));
}
// Rotate transformation
t2 = CGAffineTransformRotate(t1, degreesToRadians(degrees));
//t2 = CGAffineTransformRotate(t1, -90);
// Step 3
// Set the appropriate render sizes and rotational transforms
if (!self.mutableVideoComposition) {
// Create a new video composition
self.mutableVideoComposition = [AVMutableVideoComposition videoComposition];
// self.mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
self.mutableVideoComposition.renderSize = CGSizeMake(finalWidth,finalHeight);
self.mutableVideoComposition.frameDuration = CMTimeMake(1,30);
// The rotate transform is set on a layer instruction
instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [self.mutableComposition duration]);
layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(self.mutableComposition.tracks)[0]];
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
} else {
self.mutableVideoComposition.renderSize = CGSizeMake(self.mutableVideoComposition.renderSize.height, self.mutableVideoComposition.renderSize.width);
// Extract the existing layer instruction on the mutableVideoComposition
instruction = (self.mutableVideoComposition.instructions)[0];
layerInstruction = (instruction.layerInstructions)[0];
// Check if a transform already exists on this layer instruction, this is done to add the current transform on top of previous edits
CGAffineTransform existingTransform;
if (![layerInstruction getTransformRampForTime:[self.mutableComposition duration] startTransform:&existingTransform endTransform:NULL timeRange:NULL]) {
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
} else {
// Note: the point of origin for rotation is the upper left corner of the composition, t3 is to compensate for origin
CGAffineTransform t3 = CGAffineTransformMakeTranslation(-1*assetVideoTrack.naturalSize.height/2, 0.0);
CGAffineTransform newTransform = CGAffineTransformConcat(existingTransform, CGAffineTransformConcat(t2, t3));
[layerInstruction setTransform:newTransform atTime:kCMTimeZero];
}
}
// Step 4
// Add the transform instructions to the video composition
instruction.layerInstructions = @[layerInstruction];
self.mutableVideoComposition.instructions = @[instruction];
// Step 5
// Notify AVSEViewController about rotation operation completion
// [[NSNotificationCenter defaultCenter] postNotificationName:AVSEEditCommandCompletionNotification object:self];
[self performWithAssetExport];
}
- (void)performWithAssetExport
{
// Step 1
// Create an outputURL to which the exported movie will be saved
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *outputURL = paths[0];
NSFileManager *manager = [NSFileManager defaultManager];
[manager createDirectoryAtPath:outputURL withIntermediateDirectories:YES attributes:nil error:nil];
outputURL = [outputURL stringByAppendingPathComponent:@"output.mov"];
// Remove Existing File
[manager removeItemAtPath:outputURL error:nil];
// Step 2
// Create an export session with the composition and write the exported movie to the photo library
self.exportSession = [[AVAssetExportSession alloc] initWithAsset:[self.mutableComposition copy] presetName:AVAssetExportPreset1280x720];
self.exportSession.videoComposition = self.mutableVideoComposition;
self.exportSession.audioMix = self.mutableAudioMix;
self.exportSession.outputURL = [NSURL fileURLWithPath:outputURL];
self.exportSession.outputFileType=AVFileTypeQuickTimeMovie;
[self.exportSession exportAsynchronouslyWithCompletionHandler:^(void){
switch (self.exportSession.status) {
case AVAssetExportSessionStatusCompleted:
//[self playfunction];
[[NSNotificationCenter defaultCenter]postNotificationName:@"Backhome" object:nil];
// Step 3
// Notify AVSEViewController about export completion
break;
case AVAssetExportSessionStatusFailed:
NSLog(#"Failed:%#",self.exportSession.error);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Canceled:%#",self.exportSession.error);
break;
default:
break;
}
}];
}
It appears that this issue occurs because video.js has trouble reading the orientation metadata. More information here: http://help.videojs.com/discussions/problems/1508-video-orientation-for-iphone-wrong
Based on the implied solution, you should make sure that when you save the video you use AVFoundation to set the orientation value. Information on how to do that is available in this previous Stack Overflow post: How do I set the orientation for a frame-by-frame-generated video using AVFoundation?
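As a minimal sketch of that idea when writing frames with an AVAssetWriter (the videoSettings dictionary and the rotation angle here are placeholders; adjust them to your capture setup):
// Bake the orientation into the output file by setting the writer input's transform
// before writing starts; players that honor the transform then display the video upright.
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                     outputSettings:videoSettings];
// Example: rotate 90 degrees so a portrait capture plays back upright.
writerInput.transform = CGAffineTransformMakeRotation(M_PI_2);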
I'm finding a number of conflicting data about playing sounds in iOS. What is a recommended way to play just a simple "ping" sound bite every time the user touches the screen?
I use this:
Header file:
#import <AudioToolbox/AudioServices.h>
@interface SoundEffect : NSObject
{
SystemSoundID soundID;
}
- (id)initWithSoundNamed:(NSString *)filename;
- (void)play;
@end
Source file:
#import "SoundEffect.h"
@implementation SoundEffect
- (id)initWithSoundNamed:(NSString *)filename
{
if ((self = [super init]))
{
NSURL *fileURL = [[NSBundle mainBundle] URLForResource:filename withExtension:nil];
if (fileURL != nil)
{
SystemSoundID theSoundID;
OSStatus error = AudioServicesCreateSystemSoundID((__bridge CFURLRef)fileURL, &theSoundID);
if (error == kAudioServicesNoError)
soundID = theSoundID;
}
}
return self;
}
- (void)dealloc
{
AudioServicesDisposeSystemSoundID(soundID);
}
- (void)play
{
AudioServicesPlaySystemSound(soundID);
}
@end
You will need to create an instance of SoundEffect and call the play method on it directly.
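For example, a minimal usage sketch (the property name and sound file name are placeholders):
// Keep a strong reference (e.g. a property) so the object stays alive while the sound plays.
self.tapSound = [[SoundEffect alloc] initWithSoundNamed:@"ping.caf"];
// Later, e.g. in a touch handler:
[self.tapSound play];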
This is the best way of playing a simple sound in iOS (no more than 30 seconds):
//Retrieve audio file
NSString *path = [[NSBundle mainBundle] pathForResource:@"soundeffect" ofType:@"m4a"];
NSURL *pathURL = [NSURL fileURLWithPath : path];
SystemSoundID audioEffect;
AudioServicesCreateSystemSoundID((__bridge CFURLRef) pathURL, &audioEffect);
AudioServicesPlaySystemSound(audioEffect);
// call the following function when the sound is no longer used
// (must be done AFTER the sound is done playing)
AudioServicesDisposeSystemSoundID(audioEffect);
(Small amendment to the correct answer to take care of the disposing of the audio)
NSString *path = [[NSBundle mainBundle] pathForResource:@"soundeffect" ofType:@"m4a"];
NSURL *pathURL = [NSURL fileURLWithPath : path];
SystemSoundID audioEffect;
AudioServicesCreateSystemSoundID((__bridge CFURLRef) pathURL, &audioEffect);
AudioServicesPlaySystemSound(audioEffect);
// Using GCD, we can use a block to dispose of the audio effect without using a NSTimer or something else to figure out when it'll be finished playing.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(30 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
AudioServicesDisposeSystemSoundID(audioEffect);
});
You can use AVFoundation or AudioToolbox.
Here are two examples that use the libraries separately (a minimal AVAudioPlayer sketch also follows below):
SAMSoundEffect (archived repository)
BRYSoundEffectPlayer
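If you want to use AVFoundation directly, a rough AVAudioPlayer sketch looks like this (the file name and property name are placeholders; keep a strong reference to the player, otherwise it can be deallocated before the sound finishes):
#import <AVFoundation/AVFoundation.h>

NSURL *soundURL = [[NSBundle mainBundle] URLForResource:@"ping" withExtension:@"caf"];
NSError *error = nil;
// self.pingPlayer is assumed to be a strong property, e.g. @property (strong) AVAudioPlayer *pingPlayer;
self.pingPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:soundURL error:&error];
if (self.pingPlayer == nil) {
    NSLog(@"Could not load sound: %@", error);
} else {
    [self.pingPlayer prepareToPlay];
    [self.pingPlayer play];
}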
Here's an updated answer for Swift (4):
import AudioToolbox
func playSound() {
var soundId: SystemSoundID = 0
guard let soundPath = Bundle.main.url(forResource: "Success7", withExtension: "wav") else {
print("Error finding file")
return
}
let error = AudioServicesCreateSystemSoundID(soundPath as CFURL, &soundId)
if error != kAudioServicesNoError {
print("Error loading sound")
return
}
AudioServicesPlaySystemSoundWithCompletion(soundId) {
AudioServicesDisposeSystemSoundID(soundId)
}
}
If you have a sound effect that you want to play multiple times in a view, then you can be a little more smart about loading and disposing of the audio:
class YourViewController: UIViewController {
fileprivate lazy var soundId: SystemSoundID? = {
guard let soundPath = Bundle.main.url(forResource: "Success7", withExtension: "wav") else {
return nil
}
var soundId: SystemSoundID = 0
let error = AudioServicesCreateSystemSoundID(soundPath as CFURL, &soundId)
if error != kAudioServicesNoError {
return nil
}
return soundId
}()
func playScannedSound() {
guard let soundId = self.soundId else {
return
}
AudioServicesPlaySystemSoundWithCompletion(soundId, nil)
}
deinit {
guard let soundId = self.soundId else {
return
}
AudioServicesDisposeSystemSoundID(soundId)
}
}
I'm embedded in an environment (Adobe AIR) where I cannot override didFinishLaunchingWithOptions. Is there any other way to get those options? Are they stored in some global variable somewhere? Or does anyone know how to get those options in AIR?
I need this for Apple Push Notification Service (APNS).
Following the path in the link Michiel left ( http://www.tinytimgames.com/2011/09/01/unity-plugins-and-uiapplicationdidfinishlaunchingnotifcation/ ), you can create a class whose +load method adds an observer for the UIApplicationDidFinishLaunchingNotification. When the observer method is executed, the launchOptions will be contained in the notification's userInfo. I was doing this with local notifications, so this was the implementation of my class:
static BOOL _launchedWithNotification = NO;
static UILocalNotification *_localNotification = nil;
@implementation NotificationChecker
+ (void)load
{
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(createNotificationChecker:)
name:@"UIApplicationDidFinishLaunchingNotification" object:nil];
}
+ (void)createNotificationChecker:(NSNotification *)notification
{
NSDictionary *launchOptions = [notification userInfo] ;
// This code will be called immediately after application:didFinishLaunchingWithOptions:.
UILocalNotification *localNotification = [launchOptions objectForKey: @"UIApplicationLaunchOptionsLocalNotificationKey"];
if (localNotification)
{
_launchedWithNotification = YES;
_localNotification = localNotification;
}
else
{
_launchedWithNotification = NO;
}
}
+(BOOL) applicationWasLaunchedWithNotification
{
return _launchedWithNotification;
}
+(UILocalNotification*) getLocalNotification
{
return _localNotification;
}
@end
Then when my extension context is initialized I check the NotificationChecker class to see if the application was launched with a notification.
BOOL appLaunchedWithNotification = [NotificationChecker applicationWasLaunchedWithNotification];
if(appLaunchedWithNotification)
{
[UIApplication sharedApplication].applicationIconBadgeNumber = 0;
UILocalNotification *notification = [NotificationChecker getLocalNotification];
NSString *type = [notification.userInfo objectForKey:@"type"];
FREDispatchStatusEventAsync(context, (uint8_t*)[@"notificationSelected" UTF8String], (uint8_t*)[type UTF8String]);
}
Hope that helps someone!
First time asking a question here. I'm hoping the post is clear and sample code is formatted correctly.
I'm experimenting with AVFoundation and time lapse photography.
My intent is to grab every Nth frame from the video camera of an iOS device (my iPod touch, version 4) and write each of those frames out to a file to create a timelapse. I'm using AVCaptureVideoDataOutput, AVAssetWriter and AVAssetWriterInput.
The problem is, if I use the CMSampleBufferRef passed to captureOutput:didOutputSampleBuffer:fromConnection:, the playback of each frame is the length of time between the original input frames, a frame rate of, say, 1 fps. I'm looking to get 30 fps.
I've tried using CMSampleBufferCreateCopyWithNewTiming(), but then after 13 frames are written to the file, captureOutput:didOutputSampleBuffer:fromConnection: stops being called. The interface is active and I can tap a button to stop the capture and save it to the photo library for playback. It appears to play back as I want it, 30 fps, but it only has those 13 frames.
How can I accomplish my goal of 30fps playback?
How can I tell where the app is getting lost and why?
I've placed a flag called useNativeTime so I can test both cases. When set to YES, I get all frames I'm interested in as the callback doesn't 'get lost'. When I set that flag to NO, I only ever get 13 frames processed and am never returned to that method again. As mentioned above, in both cases I can playback the video.
Thanks for any help.
Here is where I'm trying to do the retiming.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
BOOL useNativeTime = NO;
BOOL appendSuccessFlag = NO;
//NSLog(#"in captureOutpput sample buffer method");
if( !CMSampleBufferDataIsReady(sampleBuffer) )
{
NSLog( #"sample buffer is not ready. Skipping sample" );
//CMSampleBufferInvalidate(sampleBuffer);
return;
}
if (! [inputWriterBuffer isReadyForMoreMediaData])
{
NSLog(#"Not ready for data.");
}
else {
// Write every first frame of n frames (30 native from camera).
intervalFrames++;
if (intervalFrames > 30) {
intervalFrames = 1;
}
else if (intervalFrames != 1) {
//CMSampleBufferInvalidate(sampleBuffer);
return;
}
// Need to initialize start session time.
if (writtenFrames < 1) {
if (useNativeTime) imageSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
else imageSourceTime = CMTimeMake( 0 * 20 ,600); //CMTimeMake(1,30);
[outputWriter startSessionAtSourceTime: imageSourceTime];
NSLog(#"Starting CMtime");
CMTimeShow(imageSourceTime);
}
if (useNativeTime) {
imageSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
CMTimeShow(imageSourceTime);
// CMTime myTiming = CMTimeMake(writtenFrames * 20,600);
// CMSampleBufferSetOutputPresentationTimeStamp(sampleBuffer, myTiming); // Tried but has no affect.
appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:sampleBuffer];
}
else {
CMSampleBufferRef newSampleBuffer;
CMSampleTimingInfo sampleTimingInfo;
sampleTimingInfo.duration = CMTimeMake(20,600);
sampleTimingInfo.presentationTimeStamp = CMTimeMake( (writtenFrames + 0) * 20,600);
sampleTimingInfo.decodeTimeStamp = kCMTimeInvalid;
OSStatus myStatus;
//NSLog(#"numSamples of sampleBuffer: %i", CMSampleBufferGetNumSamples(sampleBuffer) );
myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
sampleBuffer,
1,
&sampleTimingInfo, // maybe a little confused on this param.
&newSampleBuffer);
// These confirm the good health of our newSampleBuffer.
if (myStatus != 0) NSLog(@"CMSampleBufferCreateCopyWithNewTiming() myStatus: %i",myStatus);
if (! CMSampleBufferIsValid(newSampleBuffer)) NSLog(@"CMSampleBufferIsValid NOT!");
// No effect.
//myStatus = CMSampleBufferMakeDataReady(newSampleBuffer); // How is this different; CMSampleBufferSetDataReady ?
//if (myStatus != 0) NSLog(@"CMSampleBufferMakeDataReady() myStatus: %i",myStatus);
imageSourceTime = CMSampleBufferGetPresentationTimeStamp(newSampleBuffer);
CMTimeShow(imageSourceTime);
appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:newSampleBuffer];
//CMSampleBufferInvalidate(sampleBuffer); // Docs don't describe action. WTF does it do? Doesn't seem to affect my problem. Used with CMSampleBufferSetInvalidateCallback maybe?
//CFRelease(sampleBuffer); // - Not surprisingly - “EXC_BAD_ACCESS”
}
if (!appendSuccessFlag)
{
NSLog(#"Failed to append pixel buffer");
}
else {
writtenFrames++;
NSLog(#"writtenFrames: %i", writtenFrames);
}
}
//[self displayOuptutWritterStatus]; // Expect and see AVAssetWriterStatusWriting.
}
My setup routine.
- (IBAction) recordingStartStop: (id) sender
{
NSError * error;
if (self.isRecording) {
NSLog(#"~~~~~~~~~ STOPPING RECORDING ~~~~~~~~~");
self.isRecording = NO;
[recordingStarStop setTitle: #"Record" forState: UIControlStateNormal];
//[self.captureSession stopRunning];
[inputWriterBuffer markAsFinished];
[outputWriter endSessionAtSourceTime:imageSourceTime];
[outputWriter finishWriting]; // Blocks until file is completely written, or an error occurs.
NSLog(#"finished CMtime");
CMTimeShow(imageSourceTime);
// Really, I should loop through the outputs and close all of them or target specific ones.
// Since I'm only recording video right now, I feel safe doing this.
[self.captureSession removeOutput: [[self.captureSession outputs] objectAtIndex: 0]];
[videoOutput release];
[inputWriterBuffer release];
[outputWriter release];
videoOutput = nil;
inputWriterBuffer = nil;
outputWriter = nil;
NSLog(#"~~~~~~~~~ STOPPED RECORDING ~~~~~~~~~");
NSLog(#"Calling UIVideoAtPathIsCompatibleWithSavedPhotosAlbum.");
NSLog(#"filePath: %#", [projectPaths movieFilePath]);
if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum([projectPaths movieFilePath])) {
NSLog(#"Calling UISaveVideoAtPathToSavedPhotosAlbum.");
UISaveVideoAtPathToSavedPhotosAlbum ([projectPaths movieFilePath], self, #selector(video:didFinishSavingWithError: contextInfo:), nil);
}
NSLog(#"~~~~~~~~~ WROTE RECORDING to PhotosAlbum ~~~~~~~~~");
}
else {
NSLog(#"~~~~~~~~~ STARTING RECORDING ~~~~~~~~~");
projectPaths = [[ProjectPaths alloc] initWithProjectFolder: #"TestProject"];
intervalFrames = 30;
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
NSMutableDictionary * cameraVideoSettings = [[[NSMutableDictionary alloc] init] autorelease];
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt: kCVPixelFormatType_32BGRA]; //kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
[cameraVideoSettings setValue: value forKey: key];
[videoOutput setVideoSettings: cameraVideoSettings];
[videoOutput setMinFrameDuration: CMTimeMake(20, 600)]; //CMTimeMake(1, 30)]; // 30fps
[videoOutput setAlwaysDiscardsLateVideoFrames: YES];
queue = dispatch_queue_create("cameraQueue", NULL);
[videoOutput setSampleBufferDelegate: self queue: queue];
dispatch_release(queue);
NSMutableDictionary *outputSettings = [[[NSMutableDictionary alloc] init] autorelease];
[outputSettings setValue: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setValue: [NSNumber numberWithInt: 1280] forKey: AVVideoWidthKey]; // currently assuming
[outputSettings setValue: [NSNumber numberWithInt: 720] forKey: AVVideoHeightKey];
NSMutableDictionary *compressionSettings = [[[NSMutableDictionary alloc] init] autorelease];
[compressionSettings setValue: AVVideoProfileLevelH264Main30 forKey: AVVideoProfileLevelKey];
//[compressionSettings setValue: [NSNumber numberWithDouble:1024.0*1024.0] forKey: AVVideoAverageBitRateKey];
[outputSettings setValue: compressionSettings forKey: AVVideoCompressionPropertiesKey];
inputWriterBuffer = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo outputSettings: outputSettings];
[inputWriterBuffer retain];
inputWriterBuffer.expectsMediaDataInRealTime = YES;
outputWriter = [AVAssetWriter assetWriterWithURL: [projectPaths movieURLPath] fileType: AVFileTypeQuickTimeMovie error: &error];
[outputWriter retain];
if (error) NSLog(@"error for outputWriter = [AVAssetWriter assetWriterWithURL:fileType:error:");
if ([outputWriter canAddInput: inputWriterBuffer]) [outputWriter addInput: inputWriterBuffer];
else NSLog(#"can not add input");
if (![outputWriter canApplyOutputSettings: outputSettings forMediaType:AVMediaTypeVideo]) NSLog(#"ouptutSettings are NOT supported");
if ([captureSession canAddOutput: videoOutput]) [self.captureSession addOutput: videoOutput];
else NSLog(#"could not addOutput: videoOutput to captureSession");
//[self.captureSession startRunning];
self.isRecording = YES;
[recordingStarStop setTitle: #"Stop" forState: UIControlStateNormal];
writtenFrames = 0;
imageSourceTime = kCMTimeZero;
[outputWriter startWriting];
//[outputWriter startSessionAtSourceTime: imageSourceTime];
NSLog(#"~~~~~~~~~ STARTED RECORDING ~~~~~~~~~");
NSLog (#"recording to fileURL: %#", [projectPaths movieURLPath]);
}
NSLog(#"isRecording: %#", self.isRecording ? #"YES" : #"NO");
[self displayOuptutWritterStatus];
}
OK, I found the bug in my first post.
When using
myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
sampleBuffer,
1,
&sampleTimingInfo,
&newSampleBuffer);
you need to balance that with a CFRelease(newSampleBuffer);
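For clarity, using the variable names from the code above, the balanced create/append/release pattern looks roughly like this:
CMSampleBufferRef newSampleBuffer = NULL;
OSStatus myStatus = CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                                          sampleBuffer,
                                                          1,
                                                          &sampleTimingInfo,
                                                          &newSampleBuffer);
if (myStatus == noErr && newSampleBuffer != NULL) {
    appendSuccessFlag = [inputWriterBuffer appendSampleBuffer:newSampleBuffer];
    CFRelease(newSampleBuffer); // balances the Create call so sample buffers are not leaked
}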
The same idea holds true when using a CVPixelBufferRef with the pixelBufferPool of an AVAssetWriterInputPixelBufferAdaptor instance. You would use CVPixelBufferRelease(yourCVPixelBufferRef); after calling the appendPixelBuffer:withPresentationTime: method.
Hope this is helpful to someone else.
With a little more searching and reading I have a working solution. I don't know if it is the best method, but so far, so good.
In my setup area I've set up an AVAssetWriterInputPixelBufferAdaptor. The code addition looks like this.
inputWriterBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput: inputWriterBuffer
sourcePixelBufferAttributes: nil];
[inputWriterBufferAdaptor retain];
For completeness to understand the code below, I also have these three lines in the setup method.
fpsOutput = 30; //Some possible values: 30, 10, 15, 24, 25, 30/1.001 or 29.97;
cmTimeSecondsDenominatorTimescale = 600 * 100000; //To more precisely handle 29.97.
cmTimeNumeratorValue = cmTimeSecondsDenominatorTimescale / fpsOutput;
Instead of applying a retiming to a copy of the sample buffer, I now have the following three lines of code that effectively do the same thing. Notice the withPresentationTime: parameter of the adaptor. By passing my custom value to it, I get the correct timing I'm seeking.
CVPixelBufferRef myImage = CMSampleBufferGetImageBuffer( sampleBuffer );
imageSourceTime = CMTimeMake( writtenFrames * cmTimeNumeratorValue, cmTimeSecondsDenominatorTimescale);
appendSuccessFlag = [inputWriterBufferAdaptor appendPixelBuffer: myImage withPresentationTime: imageSourceTime];
Use of the AVAssetWriterInputPixelBufferAdaptor.pixelBufferPool property may have some gains, but I haven't figured that out.
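In case it helps, here is a rough, untested sketch of what using the adaptor's pool might look like (the pool is only available once writing has started, and the frame data still has to be copied or rendered into the pooled buffer, which is only indicated by a comment here):
CVPixelBufferRef pooledBuffer = NULL;
CVReturn poolStatus = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                         inputWriterBufferAdaptor.pixelBufferPool,
                                                         &pooledBuffer);
if (poolStatus == kCVReturnSuccess && pooledBuffer != NULL) {
    // ... copy or render the source frame into pooledBuffer here ...
    imageSourceTime = CMTimeMake(writtenFrames * cmTimeNumeratorValue, cmTimeSecondsDenominatorTimescale);
    appendSuccessFlag = [inputWriterBufferAdaptor appendPixelBuffer:pooledBuffer withPresentationTime:imageSourceTime];
    CVPixelBufferRelease(pooledBuffer);
}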