Is there any way to record onto the end of an audio file? We can't just pause the recording instead of stopping it, because the user needs to be able to come back to the app later and add more audio to their recording. Currently, the audio is stored in CoreData as NSData. NSData's AppendData does not work because the resulting audio file still reports that it is only as long as the original data.
Another possibility would be taking the original audio file, along with the new one, and concatenate them into one audio file, if there's any way to do that.
This can be done fairly easily using AVMutableCompositionTrack's insertTimeRange:ofTrack:atTime:error: method. The code is somewhat lengthy, but it really boils down to four steps:
// Create a new audio track we can append to
AVMutableComposition* composition = [AVMutableComposition composition];
AVMutableCompositionTrack* appendedAudioTrack =
[composition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
// Grab the two audio tracks that need to be appended
AVURLAsset* originalAsset = [[AVURLAsset alloc]
initWithURL:[NSURL fileURLWithPath:originalAudioPath] options:nil];
AVURLAsset* newAsset = [[AVURLAsset alloc]
initWithURL:[NSURL fileURLWithPath:newAudioPath] options:nil];
NSError* error = nil;
// Grab the first audio track and insert it into our appendedAudioTrack
NSArray *originalTracks = [originalAsset tracksWithMediaType:AVMediaTypeAudio];
CMTimeRange timeRange = CMTimeRangeMake(kCMTimeZero, originalAsset.duration);
[appendedAudioTrack insertTimeRange:timeRange
ofTrack:[originalTracks objectAtIndex:0]
atTime:kCMTimeZero
error:&error];
if (error)
{
// do something
return;
}
// Grab the second audio track and insert it at the end of the first one
NSArray *newTracks = [newAsset tracksWithMediaType:AVMediaTypeAudio];
timeRange = CMTimeRangeMake(kCMTimeZero, newAsset.duration);
[appendedAudioTrack insertTimeRange:timeRange
ofTrack:[newTracks objectAtIndex:0]
atTime:originalAsset.duration
error:&error];
if (error)
{
// do something
return;
}
// Create a new audio file using the appendedAudioTrack
AVAssetExportSession* exportSession = [AVAssetExportSession
exportSessionWithAsset:composition
presetName:AVAssetExportPresetAppleM4A];
if (!exportSession)
{
// do something
return;
}
NSString* appendedAudioPath = @""; // make sure to fill this value in
exportSession.outputURL = [NSURL fileURLWithPath:appendedAudioPath];
exportSession.outputFileType = AVFileTypeAppleM4A;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
// exported successfully?
switch (exportSession.status)
{
case AVAssetExportSessionStatusFailed:
// inspect exportSession.error here
break;
case AVAssetExportSessionStatusCompleted:
// you should now have the appended audio file
break;
case AVAssetExportSessionStatusWaiting:
break;
default:
break;
}
}];
You can append two audio files by creating an AVMutableCompositionTrack, adding the two files to it, and exporting the composition with AVAssetExportSession's exportAsynchronouslyWithCompletionHandler: method.
Please refer to the links below for more details.
AVAssetExportSession Class Reference
Creating New Assets
Hope this helps to solve your issue.
I don't have a complete code example but the Extended Audio File Services can help you concatenate two audio files. Search for Extended Audio File Services in Xcode or visit the link below.
Apple documentation
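If you go this route, the general shape is: open both source files, read them through a common PCM client format, and write the decoded frames into one destination file in order. Below is a rough, untested sketch of that approach; the function name, the 44.1 kHz stereo client format, and the uncompressed CAF output are my own assumptions, not something taken from this answer or Apple's docs.
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>
// Rough sketch only: concatenate two audio files into an uncompressed CAF file.
static BOOL ConcatenateAudioFiles(NSURL *firstURL, NSURL *secondURL, NSURL *outputURL)
{
    ExtAudioFileRef firstFile = NULL, secondFile = NULL, outputFile = NULL;
    if (ExtAudioFileOpenURL((__bridge CFURLRef)firstURL, &firstFile) != noErr ||
        ExtAudioFileOpenURL((__bridge CFURLRef)secondURL, &secondFile) != noErr) {
        return NO;
    }
    // Decode both inputs to a common client format: 44.1 kHz, 16-bit interleaved stereo PCM.
    AudioStreamBasicDescription clientFormat = {0};
    clientFormat.mSampleRate       = 44100.0;
    clientFormat.mFormatID         = kAudioFormatLinearPCM;
    clientFormat.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    clientFormat.mChannelsPerFrame = 2;
    clientFormat.mBitsPerChannel   = 16;
    clientFormat.mBytesPerFrame    = 4;
    clientFormat.mFramesPerPacket  = 1;
    clientFormat.mBytesPerPacket   = 4;
    // The destination stores that same PCM in a CAF container.
    if (ExtAudioFileCreateWithURL((__bridge CFURLRef)outputURL, kAudioFileCAFType,
                                  &clientFormat, NULL, kAudioFileFlags_EraseFile,
                                  &outputFile) != noErr) {
        return NO;
    }
    ExtAudioFileRef files[3] = { firstFile, secondFile, outputFile };
    for (int i = 0; i < 3; i++) {
        ExtAudioFileSetProperty(files[i], kExtAudioFileProperty_ClientDataFormat,
                                sizeof(clientFormat), &clientFormat);
    }
    // Pump frames from each input into the output, in order.
    const UInt32 framesPerRead = 4096;
    const UInt32 bufferSize = framesPerRead * clientFormat.mBytesPerFrame;
    void *buffer = malloc(bufferSize);
    for (int i = 0; i < 2; i++) {
        while (YES) {
            AudioBufferList bufferList;
            bufferList.mNumberBuffers = 1;
            bufferList.mBuffers[0].mNumberChannels = clientFormat.mChannelsPerFrame;
            bufferList.mBuffers[0].mDataByteSize = bufferSize;
            bufferList.mBuffers[0].mData = buffer;
            UInt32 frames = framesPerRead;
            if (ExtAudioFileRead(files[i], &frames, &bufferList) != noErr || frames == 0) {
                break;
            }
            ExtAudioFileWrite(outputFile, frames, &bufferList);
        }
    }
    free(buffer);
    ExtAudioFileDispose(firstFile);
    ExtAudioFileDispose(secondFile);
    ExtAudioFileDispose(outputFile);
    return YES;
}
Error handling and cleanup on the failure paths are deliberately minimal here; a production version should dispose of any files it managed to open before returning.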
We had the same requirements for our app as the OP described, and ran into the same issues (i.e., the recording has to be stopped, instead of paused, if the user wants to listen to what she has recorded up to that point). Our app (project's Github repo) uses AVQueuePlayer for playback and a method similar to kermitology's answer to concatenate the partial recordings, with some notable differences:
implemented in Swift
concatenates multiple recordings into one
no messing with tracks
The rationale behind the last item is that simple recordings with AVAudioRecorder will have one track, and the main reason for this whole workaround is to concatenate those single tracks in the assets (see Addendum 3). So why not use AVMutableComposition's insertTimeRange method instead, which takes an AVAsset instead of an AVAssetTrack?
Relevant parts: (full code)
import UIKit
import AVFoundation
class RecordViewController: UIViewController {
/* App allows volunteers to record newspaper articles for the
blind and print-impaired, hence the name.
*/
var articleChunks = [AVURLAsset]()
func concatChunks() {
let composition = AVMutableComposition()
/* `CMTimeRange` to store total duration and know when to
insert subsequent assets.
*/
var insertAt = CMTimeRange(start: kCMTimeZero, end: kCMTimeZero)
repeat {
let asset = self.articleChunks.removeFirst()
let assetTimeRange =
CMTimeRange(start: kCMTimeZero, end: asset.duration)
do {
try composition.insertTimeRange(assetTimeRange,
of: asset,
at: insertAt.end)
} catch {
NSLog("Unable to compose asset track.")
}
let nextDuration = insertAt.duration + assetTimeRange.duration
insertAt = CMTimeRange(start: kCMTimeZero, duration: nextDuration)
} while self.articleChunks.count != 0
let exportSession =
AVAssetExportSession(
asset: composition,
presetName: AVAssetExportPresetAppleM4A)
exportSession?.outputFileType = AVFileType.m4a
exportSession?.outputURL = /* create URL for output */
// exportSession?.metadata = ...
exportSession?.exportAsynchronously {
switch exportSession?.status {
case .unknown?: break
case .waiting?: break
case .exporting?: break
case .completed?: break
case .failed?: break
case .cancelled?: break
case .none: break
}
}
/* Clean up (delete partial recordings, etc.) */
}
This diagram helped me get my head around what expects what and what inherits from where. (NSObject is the implied superclass wherever there is no inheritance arrow.)
Addendum 1: I had my reservations regarding the switch part instead of using KVO on AVAssetExportSessionStatus, but the docs are clear that exportAsynchronously's callback block "is invoked when writing is complete or in the event of writing failure".
Addendum 2: Just in case if someone has issues with AVQueuePlayer: 'An AVPlayerItem cannot be associated with more than one instance of AVPlayer'
Addendum 3: Unless you are recording in stereo, but mobile devices have only one input as far as I know. Also, fancy audio mixing would require the use of AVCompositionTrack. A good SO thread: Proper AVAudioRecorder Settings for Recording Voice?
I'm attempting to play an AVAudioFile using the AVAudioEngine. The code is largely taken and adapted from the Apple Developer on-line videos, but there is no playback. Have spent some time going through the forums, but nothing seems to throw any light on it.
I have two methods. The first one calls the standard open-file dialog, opens the audio file, allocates an AVAudioBuffer object (which I will use later) and fills it with the audio data. The second one sets up the AVAudioEngine and AVAudioPlayerNode objects, connects everything up and plays the file. The two methods are listed below.
- (IBAction)getSoundFileAudioData:(id)sender {
NSOpenPanel* openPanel = [NSOpenPanel openPanel];
openPanel.title = @"Choose a .caf file";
openPanel.showsResizeIndicator = YES;
openPanel.showsHiddenFiles = NO;
openPanel.canChooseDirectories = NO;
openPanel.canCreateDirectories = YES;
openPanel.allowsMultipleSelection = NO;
openPanel.allowedFileTypes = @[@"caf"];
[openPanel beginWithCompletionHandler:^(NSInteger result){
if (result == NSFileHandlingPanelOKButton) {
NSURL* theAudioFileURL = [[openPanel URLs] objectAtIndex:0];
// Open the document.
theAudioFile = [[AVAudioFile alloc] initForReading:theAudioFileURL error:nil];
AVAudioFormat *format = theAudioFile.processingFormat;
AVAudioFrameCount capacity = (AVAudioFrameCount)theAudioFile.length;
theAudioBuffer = [[AVAudioPCMBuffer alloc]
initWithPCMFormat:format frameCapacity:capacity];
NSError *error;
if (![theAudioFile readIntoBuffer:theAudioBuffer error:&error]) {
NSLog(#"problem filling buffer");
}
else
playOrigSoundFileButton.enabled = true;
}
}];
}
- (IBAction)playSoundFileAudioData:(id)sender {
AVAudioEngine *engine = [[AVAudioEngine alloc] init]; // set up the audio engine
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init]; // set up a player node
[engine attachNode:player]; // attach player node to engine
AVAudioMixerNode *mixer = [engine mainMixerNode];
[engine connect:player to:mixer format:theAudioFile.processingFormat]; // connect player node to mixer
[engine connect:player to:mixer format:[mixer outputFormatForBus:0]]; // connect player node to mixer
NSError *error;
if (![engine startAndReturnError:&error]) {
NSLog(#"Problem starting engine ");
}
else {
[player scheduleFile:theAudioFile atTime: nil completionHandler:nil];
[player play];
}
}
I have checked that the functions are being executed, that the audio engine is running and the audio file has been read. I have also tried with different audio file formats, both mono and stereo. Any ideas?
I'm running Xcode 7.3.1.
Declare your AVAudioPlayerNode *player as a global or instance variable rather than a local one; otherwise it (and the locally created AVAudioEngine) is deallocated as soon as the method returns, so nothing ever plays.
Refer to this link: Can't play file from documents in AVAudioPlayer
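A minimal sketch of that fix, assuming an ARC project; the class and property names here are illustrative, not from the question:
#import <AVFoundation/AVFoundation.h>
@interface AudioFilePlayer : NSObject
@property (nonatomic, strong) AVAudioEngine *engine;
@property (nonatomic, strong) AVAudioPlayerNode *player;
@property (nonatomic, strong) AVAudioFile *audioFile;
- (void)playAudioFile;
@end
@implementation AudioFilePlayer
- (void)playAudioFile {
    // Keep the engine and player alive for as long as playback should run.
    self.engine = [[AVAudioEngine alloc] init];
    self.player = [[AVAudioPlayerNode alloc] init];
    [self.engine attachNode:self.player];
    [self.engine connect:self.player
                      to:self.engine.mainMixerNode
                  format:self.audioFile.processingFormat];
    NSError *error = nil;
    if (![self.engine startAndReturnError:&error]) {
        NSLog(@"Problem starting engine: %@", error);
        return;
    }
    [self.player scheduleFile:self.audioFile atTime:nil completionHandler:nil];
    [self.player play];
}
@end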
I would like to asynchronously load the duration, creation time (the timestamp at which the video was created) and locale of an asset.
All of the sample code Apple shows for loadValuesAsynchronouslyForKeys:completionHandler: uses only one key, e.g.:
NSURL *url = aUrl;
AVAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
NSArray *keys = [NSArray arrayWithObject:@"duration"];
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^() {
NSError *error = nil;
AVKeyValueStatus durationStatus = [asset statusOfValueForKey:@"duration" error:&error];
switch (durationStatus) {
case AVKeyValueStatusLoaded: {
// Read duration from asset
CMTime assetDurationInCMTime = [asset duration];
break;
}
case AVKeyValueStatusFailed:
// Report error
break;
case AVKeyValueStatusCancelled:
// Do whatever is appropriate for cancelation
}
}];
Can I assume that if one item's status is 'AVKeyValueStatusLoaded', the other values can be read at the same time in the completion block? e.g.:
[asset tracks]
[asset commonMetadata];
[asset duration]
No, you can't assume that. One of my methods looks at two keys, playable and duration, and I have found that playable is often available while duration isn't yet. I have therefore moved the loadValuesAsynchronouslyForKeys: code into a separate method, shouldSave:. The shouldSave: method I call from a timer in a method called saveWithDuration:. Once saveWithDuration: receives a non-zero duration, it goes ahead and saves stuff. To avoid waiting too long, I use an attempt counter for now -- in the future, I'll refine this (you'll notice that the error instance isn't really used at the moment).
- (void)shouldSave:(NSTimer*)theTimer {
NSString * const TBDuration = @"duration";
NSString * const TBPlayable = @"playable";
__block float theDuration = TBZeroDuration;
__block NSError *error = nil;
NSArray *assetKeys = [NSArray arrayWithObjects:TBDuration, TBPlayable, nil];
[_audioAsset loadValuesAsynchronouslyForKeys:assetKeys completionHandler:^() {
AVKeyValueStatus playableStatus = [_audioAsset statusOfValueForKey:TBPlayable error:&error];
switch (playableStatus) {
case AVKeyValueStatusLoaded:
//go on
break;
default:
return;
}
AVKeyValueStatus durationStatus = [_audioAsset statusOfValueForKey:TBDuration error:&error];
switch (durationStatus) {
case AVKeyValueStatusLoaded:
theDuration = CMTimeGetSeconds(_audioAsset.duration);
break;
default:
return;
}
}];
NSUInteger attempt = [[[theTimer userInfo] objectForKey:TBAttemptKey] integerValue];
attempt++;
[self saveWithDuration:theDuration attempt:attempt error:&error];
}
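For completeness, here is a hypothetical sketch of what the saveWithDuration:attempt:error: counterpart and its retry timer might look like. The 0.5 s interval, the retry limit, and the way TBAttemptKey is packed into userInfo are my own assumptions, not part of the original answer (TBZeroDuration and TBAttemptKey are the constants already used in the code above):
- (void)saveWithDuration:(float)duration attempt:(NSUInteger)attempt error:(NSError **)error {
    if (duration > TBZeroDuration) {
        // The duration is known, so persist the recording info now.
        // ... actual save logic goes here ...
        return;
    }
    if (attempt >= 10) {
        // Give up after a bounded number of retries instead of polling forever.
        return;
    }
    // Try again shortly; -shouldSave: reads the attempt count back out of userInfo.
    NSDictionary *userInfo = @{ TBAttemptKey : @(attempt) };
    [NSTimer scheduledTimerWithTimeInterval:0.5
                                     target:self
                                   selector:@selector(shouldSave:)
                                   userInfo:userInfo
                                    repeats:NO];
}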
Technically you can't. The documentation for loadValuesAsynchronouslyForKeys:completionHandler: says that:
"The completion states of the keys you specify in keys are not necessarily the same—some may be loaded, and others may have failed. You must check the status of each key individually."
In practice, I think this is often a safe assumption -- as you've noted, Apple's StitchedStreamPlayer sample project just looks at the status of the first key.
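For reference, a small sketch of checking each requested key individually, as the docs require; the asset, URL, and key list here are illustrative:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someURL options:nil];
NSArray *keys = @[@"duration", @"tracks", @"commonMetadata"];
[asset loadValuesAsynchronouslyForKeys:keys completionHandler:^{
    for (NSString *key in keys) {
        NSError *error = nil;
        AVKeyValueStatus status = [asset statusOfValueForKey:key error:&error];
        if (status != AVKeyValueStatusLoaded) {
            NSLog(@"Key %@ is not ready (status %ld): %@", key, (long)status, error);
        }
    }
    // Only read the values whose status came back as AVKeyValueStatusLoaded.
}];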
No, you cannot assume so. I usually rely on the @"duration" key to create an AVPlayerItem and start playback, since loading of @"playable" generally doesn't guarantee that the asset is ready yet. Then I spawn a timer to check periodically whether other keys such as @"tracks" are loaded or not, similar to what Elise van Looij has mentioned above.
Also, as a side note, do remember that the completionHandler in loadValuesAsynchronouslyForKeys: is called on an arbitrary background thread. You will have to dispatch to the main thread if you are dealing with UI or an AVPlayerLayer.
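A minimal sketch of that pattern, assuming you only need the duration before creating the player item (the URL and identifiers are illustrative):
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:someURL options:nil];
[asset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    NSError *error = nil;
    if ([asset statusOfValueForKey:@"duration" error:&error] != AVKeyValueStatusLoaded) {
        return; // failed, cancelled, or still loading; handle as needed
    }
    // The handler runs on an arbitrary background queue, so hop to the main
    // queue before touching UI or an AVPlayerLayer.
    dispatch_async(dispatch_get_main_queue(), ^{
        AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:asset];
        // hand item to an AVPlayer, then poll for @"tracks" etc. if needed
    });
}];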
I am developing a video app for iPhone. I am recording a video and saving it to iPhone Camera Roll using AssetsLibrary framework. The API that I have used is:
- (void)writeVideoAtPathToSavedPhotosAlbum:(NSURL *)videoPathURL
completionBlock:(ALAssetsLibraryWriteVideoCompletionBlock)completionBlock
Is there any way to save custom metadata of the video to the Camera Roll using ALAsset. If this is not possible using AssetsLibrary framework, can this be done using some other method. Basically I am interested in writing details about my app as a part of the video metadata.
Since iOS 4 there is the AVFoundation framework, which also lets you read and write metadata from and to video files. There are only specific keys that you can use to add metadata with this option, but I don't believe that will be a problem.
Here's a small example that you can use to add a title to your videos (however, in this example all older metadata is removed):
// prepare metadata (add title "title")
NSMutableArray *metadata = [NSMutableArray array];
AVMutableMetadataItem *mi = [AVMutableMetadataItem metadataItem];
mi.key = AVMetadataCommonKeyTitle;
mi.keySpace = AVMetadataKeySpaceCommon;
mi.value = @"title";
[metadata addObject:mi];
// prepare video asset (SOME_URL can be an ALAsset url)
AVURLAsset *videoAsset = [[AVURLAsset alloc] initWithURL:SOME_URL options:nil];
// prepare to export, without transcoding if possible
AVAssetExportSession *_videoExportSession = [[AVAssetExportSession alloc] initWithAsset:videoAsset presetName:AVAssetExportPresetPassthrough];
[videoAsset release];
_videoExportSession.outputURL = [NSURL fileURLWithPath:_outputPath];
_videoExportSession.outputFileType = AVFileTypeQuickTimeMovie;
_videoExportSession.metadata = metadata;
[_videoExportSession exportAsynchronouslyWithCompletionHandler:^{
switch ([_videoExportSession status]) {
case AVAssetExportSessionStatusFailed:
NSLog(@"Export failed: %@", [[_videoExportSession error] localizedDescription]);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(@"Export canceled");
break;
default:
break;
}
[_videoExportSession release]; _videoExportSession = nil;
[self finishExport]; //in finishExport you can for example call writeVideoAtPathToSavedPhotosAlbum:completionBlock: to save the video from _videoExportSession.outputURL
}];
This also shows some examples: avmetadataeditor
There is no officially supported way to do this.
What you may do: store the info you want to save in a separate database, keyed to the saved asset (see the sketch below). The downside, however, is that such information is then only available in your app.
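As a rough illustration of that separate-store idea (everything here, including the use of NSUserDefaults and the dictionary keys, is just an example, not an established pattern), you could key your app-specific metadata by the Camera Roll asset URL handed back by the save:
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeVideoAtPathToSavedPhotosAlbum:videoPathURL
                            completionBlock:^(NSURL *assetURL, NSError *error) {
    if (assetURL != nil && error == nil) {
        // Store whatever app-specific details you like, keyed by the asset URL.
        NSDictionary *customInfo = @{ @"recordedWith" : @"MyApp", @"takeNumber" : @3 };
        NSMutableDictionary *store =
            [[[NSUserDefaults standardUserDefaults] dictionaryForKey:@"VideoMetadata"] mutableCopy]
            ?: [NSMutableDictionary dictionary];
        store[assetURL.absoluteString] = customInfo;
        [[NSUserDefaults standardUserDefaults] setObject:store forKey:@"VideoMetadata"];
    }
}];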
What exactly are you trying to accomplish?
You can also set the metadata in the videoWriter so something like =>
NSMutableArray *metadata = [NSMutableArray array];
AVMutableMetadataItem *mi = [AVMutableMetadataItem metadataItem];
mi.key = AVMetadataCommonKeyTitle;
mi.keySpace = AVMetadataKeySpaceCommon;
mi.value = @"title";
[metadata addObject:mi];
videoWriter.metadata = metadata;
where videoWriter is of type AVAssetWriter
and then when you stop recording you call =>
[videoWriter endSessionAtSourceTime:CMTimeMake(durationInMs, 1000)];
[videoWriter finishWritingWithCompletionHandler:^() {
ALAssetsLibrary *assetsLib = [[ALAssetsLibrary alloc] init];
[assetsLib writeVideoAtPathToSavedPhotosAlbum:videoUrl
completionBlock:^(NSURL* assetURL, NSError* error) {
if (error != nil) {
NSLog( #"Video not saved");
}
}];
}];
I would like to know if an MPMediaItem that represents a music track is for a Fairplay/DRM-protected item. Any way to do this?
Here's how I do it:
MPMediaItem* item;
NSURL* assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
NSString *title=[item valueForProperty:MPMediaItemPropertyTitle];
if (!assetURL) {
/*
* !!!: When MPMediaItemPropertyAssetURL is nil, it typically means the file
* in question is protected by DRM. (old m4p files)
*/
NSLog(#"%# has DRM",title);
}
Since iOS 4.2 there is a way. There may be a more effective way than the example here (but for my app I needed to create AVPlayerItems anyway).
MPMediaItem *item;
NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
AVPlayerItem *avItem = [[AVPlayerItem alloc] initWithURL:assetURL];
BOOL fairplayed = avItem.asset.hasProtectedContent;
From iOS 4.2 the AVAsset class has a property hasProtectedContent so you can check:
NSURL *assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
AVAsset *asset = [AVAsset assetWithURL:assetURL];
if ([asset hasProtectedContent] == NO) {..do your stuff..}
MPMediaItemPropertyAssetURL is not nil on an iPhone X running iOS 11 for songs saved offline via Apple Music, but AVPlayer is unable to play them since they are DRM protected. The same song returns a nil MPMediaItemPropertyAssetURL on iOS 9.
MPMediaItemPropertyAssetURL returns nil for songs added to the Library via Apple Music but not available offline, on both iOS 9 and 11.
Thus, @voidStern's answer (and not Justin Kent's) is the correct way to test for a DRM-protected asset.
Swift 4 version of voidStern's answer (works perfectly for me on iOS 9 to 11):
let itemUrl = targetMPMediaItem?.value(forProperty: MPMediaItemPropertyAssetURL) as? URL
if itemUrl != nil {
let theAsset = AVAsset(url: itemUrl!)
if theAsset.hasProtectedContent {
//Asset is protected
//Must be played only via MPPlayer
} else {
//Asset is not protected
//Can be played both via AVPlayer & MPPlayer
}
} else {
//probably the asset is not available offline
//Must be played only via MPPlayer
}
Another correct way of checking for a DRM-protected asset is to use the protectedAsset property of MPMediaItem, as in the answer by @weirdyu. However, this property is available only on iOS 9.2 and above.
Swift 4 solution for this method (works perfectly for me on iOS 9.2 and above):
if #available(iOS 9.2, *) {
if (targetMPMediaItem?.hasProtectedAsset)! {
//asset is protected
//Must be played only via MPMusicPlayer
} else {
//asset is not protected
//Can be played both via AVPlayer & MPMusicPlayer
}
} else {
//Fallback on earlier versions
//Probably use the method explained earlier
}
iOS 9.2+:
Use the MPMediaItem protectedAsset property.
Before iOS 9.2:
Check whether the MPMediaItem assetURL property is nil or not.
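A compact sketch of those two branches (the method name is illustrative; item is assumed to be an MPMediaItem you already have):
#import <MediaPlayer/MediaPlayer.h>
- (BOOL)isItemProtected:(MPMediaItem *)item {
    if (@available(iOS 9.2, *)) {
        // iOS 9.2+: ask the item directly.
        return item.hasProtectedAsset;
    }
    // Earlier versions: fall back to the assetURL-is-nil heuristic.
    return [item valueForProperty:MPMediaItemPropertyAssetURL] == nil;
}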
Justin Kent's solution works great. I recommend using blocks, though, or else the UX will suffer if you deal with a bunch of songs:
-(void)checkDRMprotectionForItem:(MPMediaItem*)item OnCompletion:(void (^)(BOOL drmProtected))completionBlock
{
dispatch_async(_drmCheckQueue, ^{
BOOL drmStatus = NO;
NSURL* assetURL = [item valueForProperty:MPMediaItemPropertyAssetURL];
if (!assetURL) {
drmStatus = YES;
}
dispatch_async(dispatch_get_main_queue(), ^{
if (completionBlock) {
completionBlock(drmStatus);
}
});
});
}
Now that I'm building with Swift 2 for iOS 9, I found my code broken when using hasProtectedContent or the nil-URL test. I've found the following code works:
let playerItem = AVPlayerItem(URL: mpMediaItem.assetURL!)
playableByAVPlayer = (playerItem.status == .ReadyToPlay) ? true : false
If the item is not playable by AV Player, then it's a DRM item and should be played by iPod Player (now called SystemMusicPlayer).
I was wondering how to access an MPMediaItem's raw data.
Any ideas?
You can obtain the media item's data in the following way:
-(void)mediaItemToData
{
// Implement in your project the media item picker
MPMediaItem *curItem = musicPlayer.nowPlayingItem;
NSURL *url = [curItem valueForProperty: MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL: url options:nil];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset: songAsset
presetName: AVAssetExportPresetPassthrough];
exporter.outputFileType = @"public.mpeg-4";
NSString *exportFile = [[self myDocumentsDirectory] stringByAppendingPathComponent:
#"exported.mp4"];
NSURL *exportURL = [[NSURL fileURLWithPath:exportFile] retain];
exporter.outputURL = exportURL;
// do the export
// (completion handler block omitted)
[exporter exportAsynchronouslyWithCompletionHandler:
^{
NSData *data = [NSData dataWithContentsOfFile: [[self myDocumentsDirectory]
stringByAppendingPathComponent: @"exported.mp4"]];
// Do with data something
}];
}
This code will work only on iOS 4.0 and later.
Good luck!
Of course you can access the data of an MPMediaItem. It's not crystal clear at once, but it works. Here's how:
Get the media item's URL from its MPMediaItemPropertyAssetURL property
Initialize an AVURLAsset with this URL
Initialize an AVAssetReader with this asset
Fetch the AVAssetTrack you want to read from the AVURLAsset
Create an AVAssetReaderTrackOutput with this track
Add this output to the AVAssetReader created before and -startReading
Fetch all data with AVAssetReaderTrackOutput's -copyNextSampleBuffer
PROFIT!
Here is some sample code from a project of mine (it is not a code jewel; I wrote it some time back in my coding dark ages):
typedef enum {
kEDSupportedMediaTypeAAC = 'aac ',
kEDSupportedMediaTypeMP3 = '.mp3'
} EDSupportedMediaType;
- (EDLibraryAssetReaderStatus)prepareAsset {
// Get the AVURLAsset
AVURLAsset *uasset = [m_asset URLAsset];
// Check for DRM protected content
if (uasset.hasProtectedContent) {
return kEDLibraryAssetReader_TrackIsDRMProtected;
}
if ([[uasset tracks] count] == 0) {
DDLogError(@"no asset tracks found");
return AVAssetReaderStatusFailed;
}
// Initialize a reader with a track output
NSError *err = nil;
m_reader = [[AVAssetReader alloc] initWithAsset:uasset error:&err];
if (!m_reader || err) {
DDLogError(#"could not create asset reader (%i)\n", [err code]);
return AVAssetReaderStatusFailed;
}
// Check tracks for a valid format. Currently we only support MP3 and AAC types; WAV and AIFF are too large to handle
for (AVAssetTrack *track in uasset.tracks) {
NSArray *formats = track.formatDescriptions;
for (int i=0; i<[formats count]; i++) {
CMFormatDescriptionRef format = (CMFormatDescriptionRef)[formats objectAtIndex:i];
// Check the format types
CMMediaType mediaType = CMFormatDescriptionGetMediaType(format);
FourCharCode mediaSubType = CMFormatDescriptionGetMediaSubType(format);
DDLogVerbose(#"mediaType: %s, mediaSubType: %s", COFcc(mediaType), COFcc(mediaSubType));
if (mediaType == kCMMediaType_Audio) {
if (mediaSubType == kEDSupportedMediaTypeAAC ||
mediaSubType == kEDSupportedMediaTypeMP3) {
m_track = [track retain];
m_format = CFRetain(format);
break;
}
}
}
if (m_track != nil && m_format != NULL) {
break;
}
}
if (m_track == nil || m_format == NULL) {
return kEDLibraryAssetReader_UnsupportedFormat;
}
// Create an output for the found track
m_output = [[AVAssetReaderTrackOutput alloc] initWithTrack:m_track outputSettings:nil];
[m_reader addOutput:m_output];
// Start reading
if (![m_reader startReading]) {
DDLogError(#"could not start reading asset");
return kEDLibraryAssetReader_CouldNotStartReading;
}
return 0;
}
- (OSStatus)copyNextSampleBufferRepresentation:(CMSampleBufferRepresentationRef *)repOut {
pthread_mutex_lock(&m_mtx);
OSStatus err = noErr;
AVAssetReaderStatus status = m_reader.status;
if (m_invalid) {
pthread_mutex_unlock(&m_mtx);
return kEDLibraryAssetReader_Invalidated;
}
else if (status != AVAssetReaderStatusReading) {
pthread_mutex_unlock(&m_mtx);
return kEDLibraryAssetReader_NoMoreSampleBuffers;
}
// Read the next sample buffer
CMSampleBufferRef sbuf = [m_output copyNextSampleBuffer];
if (sbuf == NULL) {
pthread_mutex_unlock(&m_mtx);
return kEDLibraryAssetReader_NoMoreSampleBuffers;
}
CMSampleBufferRepresentationRef srep = CMSampleBufferRepresentationCreateWithSampleBuffer(sbuf);
if (srep && repOut != NULL) {
*repOut = srep;
}
else {
DDLogError(#"CMSampleBufferRef corrupted");
EDCFShow(sbuf);
err = kEDLibraryAssetReader_BufferCorrupted;
}
CFRelease(sbuf);
pthread_mutex_unlock(&m_mtx);
return err;
}
You can't, and there is no workaround. An MPMediaItem is not the actual piece of media; it is just the metadata about the media item, communicated to the application via RPC from another process. The data for the item itself is not accessible in your address space.
I should note that even if you have the MPMediaItem, its data probably is not loaded into the device's memory. The flash on the iPhone is slow and memory is scarce. While Apple may not want you to have access to the raw data backing an MPMediaItem, it is just as likely that they didn't bother dealing with it because they didn't want to invest the time necessary to deal with the APIs. If they did provide access to such a thing, it almost certainly would not be as an NSData, but more likely as an NSURL your application could use to open the file and stream through the data.
In any event, if you want the functionality, you should file a bug report asking for it.
Also, as a side note, don't mention your age in a bug report you send to Apple. I think it is very cool that you are writing apps for the phone; when I was your age I loved experimenting with computers (back then I was working on things written in Lisp). The thing is, you cannot legally agree to a contract in the United States, which is why the developer agreement specifically prohibits you from joining. From the first paragraph of the agreement:
"You also certify that you are of the legal age of majority in the jurisdiction in which you reside (at least 18 years of age in many countries) and you represent that you are legally permitted to become a Registered iPhone Developer."
If you mention to a WWDR representative that you are not of age of majority they may realize you are in violation of the agreement and be obligated to terminate your developer account. Just a friendly warning.