Is there a way to determine the bit rate of the stream an MPMoviePlayerController is playing?
I am programming in Objective-C on iOS.
You can get the indicated bit rate from the event, which is the bit rate of the stream according to the m3u8 playlist. To calculate the actual bit rate, I divide event.numberOfBytesTransferred by event.durationWatched and multiply by 8.
NSArray *events = self.player.accessLog.events;
MPMovieAccessLogEvent *event = (MPMovieAccessLogEvent *)[events lastObject];
// Bytes transferred over the watched duration, converted to bits per second.
double calculatedBitRate = 8 * event.numberOfBytesTransferred / event.durationWatched;
NSNumberFormatter *nf = [[NSNumberFormatter alloc] init];
NSString *value = [nf stringFromNumber:[NSNumber numberWithDouble:calculatedBitRate]];
self.calculatedBitRateLabel.text = [NSString stringWithFormat:@"My calculated bit rate = %@", value];
Found it: the "accessLog" gives you periodic stats that include the observed bit rate:
MPMovieAccessLog *accessLog = [moviePlayer accessLog];
// The most recent event carries the latest observed bit rate.
MPMovieAccessLogEvent *event = [accessLog.events lastObject];
return event.observedBitrate;
Related
I've looked around, and there aren't many talks or examples on inertial navigation for iOS 5. I know that iOS 5 introduced some very cool sensor-fusion algorithms:
motionQueue = [[NSOperationQueue alloc] init];
[motionQueue setMaxConcurrentOperationCount:1];
motionManager = [[CMMotionManager alloc] init];
motionManager.deviceMotionUpdateInterval = 1/20.0; // 20 Hz
[motionManager startDeviceMotionUpdatesUsingReferenceFrame:CMAttitudeReferenceFrameXTrueNorthZVertical
                                                   toQueue:motionQueue
                                               withHandler:^(CMDeviceMotion *motion, NSError *error) {
    // motion.attitude, motion.userAcceleration, motion.gravity and
    // motion.rotationRate are all available here.
}];
I've watched both WWDC videos that cover the block above.
The resulting CMDeviceMotion contains the device attitude, along with gravity separated from the user-induced acceleration.
Are there any open source inertial navigation projects specifically for iOS 5 that take advantage of this new sensor fusion? I'm talking about further integrating this data with the GPS and magnetometer output to get a more accurate position fix.
A bonus question: is this kind of fusion even possible from a hardware standpoint? Will I melt my iPhone 4 if I start doing 20 Hz processing of all available sensor data over extended periods of time?
I'm ready to start tinkering with these, but would love to get something more solid to start with than the empty block above :)
Thank you for any pointers!
I am writing an app for scuba divers and hoped to add inertial navigation, since GPS and other radio-based navigation is unavailable underwater. I did quite a bit of research and found that there is simply too much jitter in the iPhone's sensor data for accurate inertial navigation. In a quick experiment, even with the device perfectly still, the "drift" due to noise in the signal showed that the device "moved" many meters after only a few minutes. Here is the code I used in my experiment. If you can see something I am doing wrong, let me know. Otherwise, I want my afternoon back!
- (void)startCoreMotion {
    CMMotionManager *manager = [[CMMotionManager alloc] init];
    if ([manager isDeviceMotionAvailable]) {
        manager.deviceMotionUpdateInterval = 1.0 / updateHz;
        [manager startDeviceMotionUpdatesToQueue:[NSOperationQueue mainQueue] withHandler:^(CMDeviceMotion *motion, NSError *error) {
            // userAcceleration is in units of g, so scale by 9.8 m/s^2, then
            // integrate once for velocity and again for position.
            xVelocity += (( 9.8 * motion.userAcceleration.x ) / updateHz);
            yVelocity += (( 9.8 * motion.userAcceleration.y ) / updateHz);
            zVelocity += (( 9.8 * motion.userAcceleration.z ) / updateHz);
            xPosition += ( xVelocity / updateHz );
            yPosition += ( yVelocity / updateHz );
            zPosition += ( zVelocity / updateHz );
            self.xPositionLabel.text = [NSString stringWithFormat:@"x = %f m", xPosition];
            self.yPositionLabel.text = [NSString stringWithFormat:@"y = %f m", yPosition];
            self.zPositionLabel.text = [NSString stringWithFormat:@"z = %f m", zPosition];
            self.xVelocityLabel.text = [NSString stringWithFormat:@"vx = %f m/s", xVelocity];
            self.yVelocityLabel.text = [NSString stringWithFormat:@"vy = %f m/s", yVelocity];
            self.zVelocityLabel.text = [NSString stringWithFormat:@"vz = %f m/s", zVelocity];
            self.distanceLabel.text = [NSString stringWithFormat:@"dist = %f m", sqrt( pow(xPosition, 2) + pow(yPosition, 2) + pow(zPosition, 2))];
        }];
    }
}
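For a sense of scale: a constant accelerometer bias b, double-integrated, produces a position error of x(t) = (1/2)·b·t². With illustrative numbers (not measurements), even a small bias of 0.01 m/s² gives (1/2) × 0.01 × 180² ≈ 162 m of phantom displacement after three minutes, which is why the device appears to "move" many meters while sitting perfectly still.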
If I have formatting for a textfield like:
// Formats the textfield based on the pickers.
- (void)pickerView:(UIPickerView *)pickerView didSelectRow:(NSInteger)row inComponent:(NSInteger)component {
    NSString *result = [feetArray objectAtIndex:[feetPicker selectedRowInComponent:0]];
    result = [result stringByAppendingFormat:@"%@ft", [feetArray objectAtIndex:[feetPicker selectedRowInComponent:1]]];
    result = [result stringByAppendingFormat:@" %@", [inchArray objectAtIndex:[inchesPicker selectedRowInComponent:0]]];
    result = [result stringByAppendingFormat:@"%@", [inchArray objectAtIndex:[inchesPicker selectedRowInComponent:1]]];
    result = [result stringByAppendingFormat:@" %@in", [fractionArray objectAtIndex:[fractionPicker selectedRowInComponent:0]]];
    myTextField.text = result;
}
This displays in the textfield like "00ft 00 0/16in". How can I convert that all to inches with a decimal? I'll need to take the feet and multiply by 12, add that to the inches, then take my fraction (e.g. 1/16), divide the numerator by 16 to get the decimal value, and add that to the inches as well, so it reads like 1234.0625 for my calculation. Can someone help me accomplish this? Thank you in advance!
NSString *theString = RiseTextField.text;
// "00ft 00 0/16in": feet are characters 0-1, inches are characters 5-6.
NSString *feetString = [theString substringWithRange:NSMakeRange(0, 2)];
NSString *inchesString = [theString substringWithRange:NSMakeRange(5, 2)];
// The fraction numerator starts at index 8 and is one digit when the whole
// string is 14 characters long, two digits otherwise.
NSUInteger rangeLength = ([theString length] == 14) ? 1 : 2;
NSString *fractionString = [theString substringWithRange:NSMakeRange(8, rangeLength)];
double totalInInches = [feetString doubleValue] * 12 + [inchesString doubleValue] + [fractionString doubleValue] / 16;
You can easily get the number you want by doing the calculations with the numbers you have there. Once you have the actual number, you should use an NSNumberFormatter to present it with the desired number of decimals and format.
This should solve your problem. Or did you need help converting the strings to numbers so that you can add them together?
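For the formatting step, a minimal sketch (the four-decimal setting is an assumption chosen to match the 1234.0625 example):
NSNumberFormatter *formatter = [[NSNumberFormatter alloc] init];
formatter.numberStyle = NSNumberFormatterDecimalStyle;
formatter.usesGroupingSeparator = NO;   // "1234.0625", not "1,234.0625"
formatter.minimumFractionDigits = 4;    // assumed precision
formatter.maximumFractionDigits = 4;
myTextField.text = [formatter stringFromNumber:[NSNumber numberWithDouble:totalInInches]];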
I would like to extract a channel of audio from an LPCM raw file, i.e. extract the left and right channels of a stereo LPCM file. The LPCM is 16-bit depth, interleaved, 2 channels, little endian. From what I gather, the order of bytes is {LeftChannel, RightChannel, LeftChannel, RightChannel, ...}, and since it is 16-bit depth there will be 2 bytes of sample for each channel, right?
So my question is: if I want to extract the left channel, would I take the bytes at addresses 0, 2, 4, 6, ..., n*2, while the right channel would be 1, 3, 5, ..., n*2+1?
Also, after extracting the audio channel, should I set the format of the extracted channel as 16-bit depth, 1 channel?
Thanks in advance
This is the code I currently use to extract PCM audio with an AVAssetReader. This code works fine for writing a music file without extracting its channel, so the problem might be caused by the format or something...
NSURL *assetURL = [song valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                                [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                // [NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
                                [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
                                [NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
                                nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [[AVAssetReader assetReaderWithAsset:songAsset
                                                            error:&assetError] retain];
if (assetError) {
    NSLog(@"error: %@", assetError);
    return;
}
AVAssetReaderOutput *assetReaderOutput = [[AVAssetReaderAudioMixOutput
                                           assetReaderAudioMixOutputWithAudioTracks:songAsset.tracks
                                           audioSettings:outputSettings] retain];
if (![assetReader canAddOutput:assetReaderOutput]) {
    NSLog(@"can't add reader output... die!");
    return;
}
[assetReader addOutput:assetReaderOutput];
NSArray *dirs = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectoryPath = [dirs objectAtIndex:0];
// CODE TO SPLIT STEREO
[self setupAudioWithFormatMono:kAudioFormatLinearPCM];
NSString *splitExportPath = [[documentsDirectoryPath stringByAppendingPathComponent:@"monoleft.caf"] retain];
if ([[NSFileManager defaultManager] fileExistsAtPath:splitExportPath]) {
    [[NSFileManager defaultManager] removeItemAtPath:splitExportPath error:nil];
}
AudioFileID mRecordFile;
NSURL *splitExportURL = [NSURL fileURLWithPath:splitExportPath];
OSStatus status = AudioFileCreateWithURL((CFURLRef)splitExportURL, kAudioFileCAFType, &_streamFormat,
                                         kAudioFileFlags_EraseFile, &mRecordFile);
NSLog(@"status os %d", status);
[assetReader startReading];
CMSampleBufferRef sampBuffer = [assetReaderOutput copyNextSampleBuffer];
UInt32 countsamp = CMSampleBufferGetNumSamples(sampBuffer);
NSLog(@"number of samples %d", countsamp);
SInt64 countPacketBuf = 0;
UInt32 numBytesIO = 0;
UInt32 numPacketsIO = 0;
NSMutableData *bufferMono = [NSMutableData new];
while (sampBuffer) {
    AudioBufferList audioBufferList;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);
    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        NSLog(@"the number of channels for buffer number %d is %d", y, audioBuffer.mNumberChannels);
        NSLog(@"The buffer size is %d", audioBuffer.mDataByteSize);
        // Append the left channel to the mono buffer: each stereo frame is
        // 4 bytes (2 bytes left + 2 bytes right), so copy the first 2 bytes
        // of every 4.
        for (int i = 0; i < audioBuffer.mDataByteSize; i = i + 4) {
            [bufferMono appendBytes:(audioBuffer.mData + i) length:2];
        }
        // The number of bytes in the mutable data containing the mono audio.
        numBytesIO = [bufferMono length];
        numPacketsIO = numBytesIO / 2;
        NSLog(@"numpacketsIO %d", numPacketsIO);
        // Write the de-interleaved data from bufferMono, not audioBuffer.mData
        // (which still holds the stereo frames).
        status = AudioFileWritePackets(mRecordFile, NO, numBytesIO, &_packetFormat, countPacketBuf, &numPacketsIO, [bufferMono bytes]);
        NSLog(@"status for writebyte %d, packets written %d", status, numPacketsIO);
        if (numPacketsIO != (numBytesIO / 2)) {
            NSLog(@"Something wrong");
            assert(0);
        }
        countPacketBuf = countPacketBuf + numPacketsIO;
        [bufferMono setLength:0];
    }
    CFRelease(blockBuffer);  // balance the retained block buffer
    CFRelease(sampBuffer);   // balance copyNextSampleBuffer
    sampBuffer = [assetReaderOutput copyNextSampleBuffer];
    if (sampBuffer) {
        countsamp = CMSampleBufferGetNumSamples(sampBuffer);
        NSLog(@"number of samples %d", countsamp);
    }
}
AudioFileClose(mRecordFile);
[assetReader cancelReading];
[self performSelectorOnMainThread:@selector(updateCompletedSizeLabel:)
                       withObject:nil
                    waitUntilDone:NO];
The output format with Audio File Services is as follows:
_streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
_streamFormat.mBitsPerChannel = 16;
_streamFormat.mChannelsPerFrame = 1;
_streamFormat.mBytesPerPacket = 2;
_streamFormat.mBytesPerFrame = 2;// (_streamFormat.mBitsPerChannel / 8) * _streamFormat.mChannelsPerFrame;
_streamFormat.mFramesPerPacket = 1;
_streamFormat.mSampleRate = 44100.0;
_packetFormat.mStartOffset = 0;
_packetFormat.mVariableFramesInPacket = 0;
_packetFormat.mDataByteSize = 2;
Sounds almost right: you have 16-bit depth, so each sample takes 2 bytes. That means the left-channel data will be in bytes {0,1}, {4,5}, {8,9}, and so on. Interleaved means the samples are interleaved, not the bytes.
Other than that, I would try it out and see if you have any problems with your code.
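As a minimal sketch of that byte layout, assuming an AudioBuffer like the one in your loop and an illustrative frameCount:
// Interleaved 16-bit stereo: L0 R0 L1 R1 ... Treating the data as SInt16
// makes the stride obvious: even samples are left, odd samples are right.
const SInt16 *interleaved = (const SInt16 *)audioBuffer.mData;
for (int frame = 0; frame < frameCount; frame++) {
    SInt16 left  = interleaved[2 * frame];      // bytes {4*frame, 4*frame+1}
    SInt16 right = interleaved[2 * frame + 1];  // bytes {4*frame+2, 4*frame+3}
    // ... append left/right to their respective mono buffers ...
}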
Also, after extracting the audio channel, should I set the format of the extracted channel as 16-bit depth, 1 channel?
Only one of the two channels remains after your extraction, so yes, this is correct.
I had a similar issue where the audio sounded 'slow'. The reason is that you specified an mChannelsPerFrame of 1 while you have dual-channel sound. Set it to 2 and it should speed up the playback. Also, do tell whether the output 'sounds' correct once you do this... :)
I'm trying to split my stereo audio into two mono files (split stereo audio to mono streams on iOS). I've been using your code but can't seem to get it to work. What's the content of your setupAudioWithFormatMono method?
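A plausible sketch of that method, wrapping the _streamFormat/_packetFormat values already quoted in the question (the mFormatID assignment is an assumption based on the parameter the question passes in):
- (void)setupAudioWithFormatMono:(UInt32)formatID {
    // Mono, 16-bit, packed signed-integer LPCM at 44.1 kHz.
    memset(&_streamFormat, 0, sizeof(_streamFormat));
    _streamFormat.mFormatID = formatID; // kAudioFormatLinearPCM (assumed)
    _streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    _streamFormat.mBitsPerChannel = 16;
    _streamFormat.mChannelsPerFrame = 1;
    _streamFormat.mBytesPerPacket = 2;
    _streamFormat.mBytesPerFrame = 2;
    _streamFormat.mFramesPerPacket = 1;
    _streamFormat.mSampleRate = 44100.0;
    _packetFormat.mStartOffset = 0;
    _packetFormat.mVariableFramesInPacket = 0;
    _packetFormat.mDataByteSize = 2;
}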
Hi Stack Overflowers, I'm hoping you might be able to help me.
I'm trying to get a collection of all songs in a user's iPhone music library from a specific year, e.g. 2002.
I'm then looking to play the songs through an MPMusicPlayerController.
It seems that you can't set up an MPMediaPropertyPredicate to filter by release date, which I think rules that out. What I don't really want to do is get a full array of all track release dates and then iterate over the NSDates, as I (perhaps wrongly) expect this could be quite slow for large libraries.
What is the best way of achieving this task?
Thanks in advance.
Here is what you are looking for.
allMedia = [MPMediaQuery songsQuery];
//MPMediaPropertyPredicate *mpp1 = [MPMediaPropertyPredicate predicateWithValue:@"2" forProperty:MPMediaItemPropertyRating comparisonType:MPMediaPredicateComparisonEqualTo];
//MPMediaPropertyPredicate *mpp2 = [MPMediaPropertyPredicate predicateWithValue:@"Pop" forProperty:MPMediaItemPropertyGenre comparisonType:MPMediaPredicateComparisonContains];
//[allMedia addFilterPredicate:mpp1];
//[allMedia addFilterPredicate:mpp2];
//[myPlayer setQueueWithQuery:allMedia];
NSArray *itemsFromGenericQuery = [allMedia items];
NSMutableArray *mArray = [[NSMutableArray alloc] init];
int i = 0;
int j = 0;
NSLog(@"itemCount: %d", [itemsFromGenericQuery count]);
float playsQuery = sliderPlays.value;
if (playsQuery == 20) { playsQuery = 10000; }
NSLog(@"sliderRating.value %f sliderPlays.value %.1f", [sliderRating value], playsQuery);
// Sample up to 1000 random tracks and keep up to 50 that match the year.
while (i++ < 1000) {
    int trackNumber = arc4random() % [itemsFromGenericQuery count];
    MPMediaItem *song = [itemsFromGenericQuery objectAtIndex:trackNumber];
    NSString *artistName = [song valueForProperty:MPMediaItemPropertyArtist];
    NSString *title = [song valueForProperty:MPMediaItemPropertyTitle];
    NSString *rating = [song valueForProperty:MPMediaItemPropertyRating];
    double length = [[song valueForProperty:MPMediaItemPropertyPlaybackDuration] doubleValue];
    NSNumber *year = [song valueForProperty:MPMediaItemPropertyYear];
    if ([year intValue] == [def intValue]) { // keep only songs from the target year
        if (j++ > 50) { break; }
        NSLog(@"tracknumber: %d j: %d artistName: %@ title: %@ length: %f year: %@ rating: %@", trackNumber, j, artistName, title, length, year, rating);
        [mArray addObject:song];
    }
}
MPMediaItemCollection *itemCol = [[MPMediaItemCollection alloc] initWithItems:mArray];
[myPlayer setQueueWithItemCollection:itemCol];
[myPlayer setShuffleMode:MPMusicShuffleModeSongs];
[myPlayer setRepeatMode:MPMusicRepeatModeNone];
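Note that MPMediaItemPropertyYear is not among the documented MPMediaItem property constants, so if it is unavailable in your SDK, here is a sketch of the same filter built on the documented MPMediaItemPropertyReleaseDate instead, with NSCalendar pulling out the year (targetYear is illustrative):
// Filter by year via the documented (but non-filterable) releaseDate property.
NSInteger targetYear = 2002;
NSCalendar *calendar = [NSCalendar currentCalendar];
NSMutableArray *matches = [NSMutableArray array];
for (MPMediaItem *item in [[MPMediaQuery songsQuery] items]) {
    NSDate *releaseDate = [item valueForProperty:MPMediaItemPropertyReleaseDate];
    if (releaseDate == nil) continue; // not every track carries a release date
    NSDateComponents *comps = [calendar components:NSYearCalendarUnit fromDate:releaseDate];
    if ([comps year] == targetYear) {
        [matches addObject:item];
    }
}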
I have an NSArray of NSNumbers and want to find the maximum value in the array. Is there any built-in functionality for doing so? I am using the iOS 4 GM, if that makes any difference.
The KVC approach looks like this:
int max = [[numbers valueForKeyPath:@"@max.intValue"] intValue];
or
NSNumber *max = [numbers valueForKeyPath:@"@max.intValue"];
with numbers as an NSArray
NSArray *test = @[@3, @67, @23, @67, @67];
int maximumValue = [[test valueForKeyPath:@"@max.self"] intValue];
NSLog(@" MaximumValue = %d", maximumValue);
// Maximum = 67
Here is the Swift version:
let maxValue = (numbers.value(forKeyPath: "@max.self") as! Double)
Hope this helps.
NSArray *arrayOfBarGraphValues = @[@65, @45, @47, @87, @46, @66, @77, @47, @79, @78, @87, @78, @87];
int maxOfBarGraphValues = [[arrayOfBarGraphValues valueForKeyPath:@"@max.self"] intValue];
NSLog(@" MaximumValue Of BarGraph = %d", maxOfBarGraphValues);