I am using the GData API to upload videos to YouTube from an iOS application.
It successfully uploads the video, but the audio is missing.
I am using the .mp4 video format.
Does anyone have a clue?
Thanks
-(BOOL) setupWriter{
NSError *error = nil;
// NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
// NSString *documentsDirectory = [paths objectAtIndex:0];
// NSURL * url = [NSURL URLWithString:documentsDirectory];
// url = [url URLByAppendingPathComponent:@"om.mp4"];
// NSString *path = [documentsDirectory stringByAppendingPathComponent:@"om.mp4"];
// [data writeToFile:path atomically:YES];
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/movie.mp4"];
_videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(_videoWriter);
// Add video input
NSDictionary *videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithDouble:128.0*1024.0], AVVideoAverageBitRateKey,
nil ];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:192], AVVideoWidthKey,
[NSNumber numberWithInt:144], AVVideoHeightKey,
videoCompressionProps, AVVideoCompressionPropertiesKey,
nil];
_videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
float angle = M_PI/2; // rotate 90°, or π/2 radians
_videoWriterInput.transform = CGAffineTransformMakeRotation(angle);
NSParameterAssert(_videoWriterInput);
_videoWriterInput.expectsMediaDataInRealTime = YES;
// Add the audio input
AudioChannelLayout acl;
bzero( &acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
NSDictionary* audioOutputSettings = nil;
// Both types of audio settings cause the output video file to be corrupted.
if( NO ) {
// should work from the iPhone 3GS on and from the iPod touch 3rd generation
audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatMPEG4AAC ], AVFormatIDKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSNumber numberWithInt: 64000 ], AVEncoderBitRateKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
nil];
} else {
// should work on any device, but requires more space
audioOutputSettings = [ NSDictionary dictionaryWithObjectsAndKeys:
[ NSNumber numberWithInt: kAudioFormatAppleLossless ], AVFormatIDKey,
[ NSNumber numberWithInt: 16 ], AVEncoderBitDepthHintKey,
[ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
[ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,
[ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
nil ];
}
_audioWriterInput = [[AVAssetWriterInput
assetWriterInputWithMediaType: AVMediaTypeAudio
outputSettings: audioOutputSettings ] retain];
_audioWriterInput.expectsMediaDataInRealTime = YES;
// add input
[_videoWriter addInput:_videoWriterInput];
[_videoWriter addInput:_audioWriterInput];
return YES;
}
This is the writer setup I am using to capture the audio. Is something wrong in it?
From the CamStudio Support Forum:
.MOV / .MP4 / .3GPP files
MOV /MP4 /3GPP files are index based. Simply put, this means that
there is an index in the file that tells us the specific location of
where in the file the video and audio frames are present. Without the
index, it is almost impossible to know where the data for a specific
video or audio frame is. This index is contained in what is called a
'moov' atom in the file.
Now, if the index is at the beginning of the file, it will enable
processing of the video as and when successive bytes of the file are
uploaded. On the other hand, if the index is at the end, processing
the video cannot begin until the entire upload is complete - since the
index is needed to interpret the file.
Hence, for MOV / MP4 / 3gpp files, we prefer the "moov" atom in the
beginning of the file - also known as a "fast start" MP4 / MOV file.
There are tools available on the web to flatten your MOV file. Usually
the video editing/export software will have options to create your
files with the moov atom in the beginning rather than the end of your
file. If you are using Apple editing tools, then see this article on
how to produce a "fast start" MP4/MOV file.
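On iOS, one way to get a fast-start file is to re-export the finished movie with AVAssetExportSession and set shouldOptimizeForNetworkUse. A rough sketch, where inputURL and outputURL are placeholders for your own file locations:

// Sketch: re-export a movie so the moov atom is written at the front ("fast start").
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
AVAssetExportSession *export = [AVAssetExportSession exportSessionWithAsset:asset
                                                                 presetName:AVAssetExportPresetPassthrough];
export.outputURL = outputURL;
export.outputFileType = AVFileTypeMPEG4;
export.shouldOptimizeForNetworkUse = YES; // moves the moov atom to the beginning of the file
[export exportAsynchronouslyWithCompletionHandler:^{
    if (export.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Fast-start file written to %@", outputURL);
    } else {
        NSLog(@"Export failed: %@", export.error);
    }
}];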
Here's a list of some well-known formats that YouTube supports:
WebM files - VP8 video codec and Vorbis audio codec
.MPEG4, 3GPP and MOV files - typically supporting the H.264 and MPEG-4 video codecs and the AAC audio codec
.AVI - many cameras output this format - typically the video codec is MJPEG and the audio is PCM
.MPEGPS - typically supporting the MPEG-2 video codec and MP2 audio
.WMV
.FLV - Adobe FLV1 video codec, MP3 audio
I have been working with the GData API for the last few days and had no similar problems. I would suggest you check out the file types supported by YouTube: http://www.google.com/support/youtube/bin/answer.py?answer=55744. Can you verify that the file you are uploading actually has audio? What does QuickTime say about the format of your audio track? YouTube recommends using AAC (as far as I can tell).
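If you want to verify this in code rather than in QuickTime, a quick sketch along these lines should work; movieURL here is just a placeholder for wherever your finished .mp4 ends up:

// Sketch: check whether the finished file actually contains an audio track.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
if ([audioTracks count] == 0) {
    NSLog(@"No audio track in %@ -- the writer never received audio samples", movieURL);
} else {
    NSLog(@"Audio track present: %@", [audioTracks objectAtIndex:0]);
}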
Related
I record an audio file with my iOS application and send it to an Android application through my web server.
The Android application successfully gets the file but when I use the MediaPlayer class to try and play it, it reports an error "Unable to to create media player" at the line mediaplayer.setDataSource(androidFilePath);
Just to mention, iOS devices can read the files sent with the app.
After some research I found that it may be an encoding issue, but I have tried many recording settings suggested here on SO and none has worked. Here is the code I use to record the audio file:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:nil];
NSMutableDictionary *recordSettings = [[NSMutableDictionary alloc] initWithCapacity:10];
[recordSettings setObject:[NSNumber numberWithInt:kAudioFormatMPEG4AAC] forKey:AVFormatIDKey];
[recordSettings setObject:[NSNumber numberWithFloat:16000.0] forKey:AVSampleRateKey];
[recordSettings setObject:[NSNumber numberWithInt:1] forKey:AVNumberOfChannelsKey];
[recordSettings setObject:[NSNumber numberWithInt:AVAudioQualityMin] forKey:AVEncoderAudioQualityKey];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = paths[0];
self.audioURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@/recordTest.caf", basePath]];
NSError *error = nil;
self.audioRecorder = [[ AVAudioRecorder alloc] initWithURL:self.audioURL settings:recordSettings error:&error];
if ([self.audioRecorder recordForDuration:60] == YES){
[self.audioRecorder record];
}
Could you tell me what changes I have to make so that Android devices can read those audio files?
I'd probably try .aac or .mp4 instead of .caf.
Try removing the AVEncoderAudioQualityKey from the code and check if this solves the problem. Also check whether the Android device you are testing with is able to play other AAC files properly.
No, you can't; the documentation lists all the supported formats. Check here.
You would have to convert to AAC or WAV before you can use it.
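If you control the recording side, it may be simpler to record straight into an AAC/.m4a file instead of .caf; Android's MediaPlayer handles that container natively. A minimal sketch, assuming the same Documents directory and sample rate as in the question (the file name is just an example):

// Sketch: record AAC into an .m4a container that Android's MediaPlayer can play.
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *basePath = [paths objectAtIndex:0];
NSURL *audioURL = [NSURL fileURLWithPath:[basePath stringByAppendingPathComponent:@"recordTest.m4a"]];

NSDictionary *recordSettings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
    [NSNumber numberWithFloat:16000.0], AVSampleRateKey,
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
    [NSNumber numberWithInt:AVAudioQualityMedium], AVEncoderAudioQualityKey,
    nil];

NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:audioURL
                                                         settings:recordSettings
                                                            error:&error];
if (recorder == nil) {
    NSLog(@"Recorder setup failed: %@", error);
} else {
    [recorder record];
}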
I'm using AVAssetReaderTrackOutput to read raw music data from the iPod library. Reading WAV or MP3 files from the iPod library works fine. However, if I read an M4A file I get data that is all 0s or garbage... The M4A file I'm reading is playable and I'm sure it is not encrypted... It seems iOS either encrypts the data or outputs 0s when you read an M4A file. Does anyone have a suggestion on how to resolve this issue?
NOTE: If I use AVAssetExportSession to export from an M4A file to an M4A file, I get correct raw data, but this means I first have to save it to disk (in the app's Documents folder, for example) and then read it. I'd like to be able to stream it, so I need a way to read it directly.
Here is my code; I get correct MP3 or WAV raw data, but not M4A:
NSURL *assetURL = [MediaItem valueForProperty:MPMediaItemPropertyAssetURL];
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
AVAssetTrack* songTrack = [songAsset.tracks objectAtIndex:0];
AVAssetReaderOutput *assetReaderOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[songAsset.tracks objectAtIndex:0] outputSettings:nil];
NSError *assetError = nil;
AVAssetReader *assetReader = [AVAssetReader assetReaderWithAsset:songAsset
error:&assetError];
[assetReader addOutput: assetReaderOutput];
//Retain
self.mStreamAssetReader = assetReader;
self.mStreamAssetReaderOutput = assetReaderOutput;
Assuming that by "raw music data" you mean raw PCM audio, you should pass an initialised NSDictionary into your AVAssetReaderTrackOutput with the appropriate parameters so that iOS will decode the AAC bitstream inside the .m4a file for you.
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:48000.0], AVSampleRateKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
AVAssetReaderOutput *assetReaderOutput =
[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:[songAsset.tracks objectAtIndex:0]
outputSettings:settings];
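With those settings the reader hands back decoded LPCM rather than the packed AAC. A rough sketch of the read loop, reusing the assetReader from the question and with error handling trimmed:

// Sketch: pull decoded PCM out of the reader, buffer by buffer.
[assetReader addOutput:assetReaderOutput];
[assetReader startReading];

CMSampleBufferRef sampleBuffer;
while ((sampleBuffer = [assetReaderOutput copyNextSampleBuffer]) != NULL) {
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
    size_t length = CMBlockBufferGetDataLength(blockBuffer);
    // copy `length` bytes of PCM out with CMBlockBufferCopyDataBytes(...) here
    CFRelease(sampleBuffer);
}

if (assetReader.status == AVAssetReaderStatusFailed) {
    NSLog(@"Reading failed: %@", assetReader.error);
}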
I have done real-time video processing in iOS using the AVFoundation framework, with the help of this link, and I have tested that it works fine. Now I want to use H.264 encoding and decoding (before drawing). I tried to get H.264-encoded data from AVCaptureSession, so I set AVVideoCodecH264 in the capture output's video settings before starting capture.
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               nil];
[captureOutput setVideoSettings:videoSettings];
The above code doesn't produce any change in the output buffer; I get the same buffer format as before. How do I accomplish this? Is it possible? If so, please help me get started with H.264 on iOS.
There's no direct access to encoded keyframes. You may find some insightful comments about that here: https://stackoverflow.com/questions/4399162/how-to-stream-live-video-from-iphone-to-a-media-server-like-wowza
The AVVideoCodecH264 / AVVideoCodecKey pair can be used in the outputSettings of an AVAssetWriterInput:
NSError *error = nil;
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey, nil];
AVAssetWriterInput *assetWriterInput = [[AVAssetWriterInput alloc]
initWithMediaType:AVMediaTypeVideo
outputSettings:outputSettings];
AVAssetWriter *assetWriter = [[AVAssetWriter alloc] initWithURL:fileURL
fileType:AVFileTypeMPEG4
error:&error];
if (!error && [assetWriter canAddInput:assetWriterInput])
[assetWriter addInput:assetWriterInput];
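To actually produce the H.264 file you then feed the raw capture buffers to that input from the capture delegate; the encoding happens inside AVAssetWriter, so you never see the encoded frames in the callback. A rough sketch, assuming assetWriter and assetWriterInput from above are kept as instance variables:

// Sketch: append raw capture buffers; AVAssetWriter encodes them to H.264 on the way to disk.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (assetWriter.status == AVAssetWriterStatusUnknown) {
        [assetWriter startWriting];
        [assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    }
    if (assetWriterInput.readyForMoreMediaData) {
        [assetWriterInput appendSampleBuffer:sampleBuffer];
    }
}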
I need to hold some video frames from a captureSession in memory and write them to a file when 'something' happens.
Similar to this solution, I use this code to put a frame into an NSMutableArray:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
//...
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
uint8 *baseAddress = (uint8*)CVPixelBufferGetBaseAddress(imageBuffer);
NSData *rawFrame = [[NSData alloc] initWithBytes:(void*)baseAddress length:(height * bytesPerRow)];
[m_frameDataArray addObject:rawFrame];
[rawFrame release];
//...
}
And this to write the video file:
-(void)writeFramesToFile
{
//...
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:640], AVVideoWidthKey,
[NSNumber numberWithInt:480], AVVideoHeightKey,
AVVideoCodecH264, AVVideoCodecKey,
nil ];
AVAssetWriterInput *bufferAssetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
AVAssetWriter *bufferAssetWriter = [[AVAssetWriter alloc]initWithURL:pathURL fileType:AVFileTypeQuickTimeMovie error:&error];
[bufferAssetWriter addInput:bufferAssetWriterInput];
[bufferAssetWriter startWriting];
[bufferAssetWriter startSessionAtSourceTime:startTime];
for (NSInteger i = 1; i < m_frameDataArray.count; i++){
NSData *rawFrame = [m_frameDataArray objectAtIndex:i];
CVImageBufferRef imgBuf = [rawFrame bytes];
[pixelBufferAdaptor appendPixelBuffer:imgBuf withPresentationTime:CMTimeMake(1,10)]; //<-- EXC_BAD_ACCESS
[rawFrame release];
}
//... (finishing video file)
}
But something is wrong with the imgBuf reference. Any suggestions? Thanks in advance.
You're supposed to lock the base address before accessing the imageBuffer's pixel data.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8 *baseAddress = (uint8*)CVPixelBufferGetBaseAddress(imageBuffer);
NSData *rawFrame = [[NSData alloc] initWithBytes:(void*)baseAddress length:(height * bytesPerRow)];
[m_frameDataArray addObject:rawFrame];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
This is pretty old, but to help those who come after, there are a few issues to be fixed:
Lock/unlock the base address when you copy out, as suggested by Alex's answer
CVImageBufferRef is an abstract base type. You want to use CVPixelBufferCreateWithBytes to make an instance, not just typecast the raw pixel bytes; the system needs to know the size and format of those pixels (see the sketch after this list)
You should create and store the new CVPixelBuffer directly from the original one's data instead of using an intermediary NSData for storage. That way you only have to do one copy instead of two.
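As a rough illustration of the second point, rebuilding a real CVPixelBuffer from the copied bytes might look like this; the 640x480 size, BGRA format, and bytes-per-row value are assumptions, so match them to your actual capture settings:

// Sketch: wrap previously copied BGRA bytes in a real CVPixelBuffer before appending.
NSData *rawFrame = [m_frameDataArray objectAtIndex:i];
CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                               640, 480,                  // assumed capture dimensions
                                               kCVPixelFormatType_32BGRA, // assumed capture pixel format
                                               (void *)[rawFrame bytes],
                                               640 * 4,                   // bytesPerRow for 32BGRA at width 640
                                               NULL, NULL, NULL,          // no release callback: rawFrame owns the bytes
                                               &pixelBuffer);
if (result == kCVReturnSuccess) {
    [pixelBufferAdaptor appendPixelBuffer:pixelBuffer
                     withPresentationTime:CMTimeMake(i, 10)];
    CVPixelBufferRelease(pixelBuffer);
}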
I'm having problems with AVAudioSession using the AVAudioRecorder in a cocos2d game that I'm working on.
I'm trying to capture mic input using a simple AVAudioRecorder example to detect when the user makes a sound in the mic (the sound itself doesn't matter, as I'm recording into /dev/null).
Here is my setup code for the microphone:
NSURL *newURL = [[NSURL alloc] initFileURLWithPath:@"/dev/null"];
[[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayAndRecord error: nil];
NSDictionary *recordSettings =
[[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithFloat: 22050.0], AVSampleRateKey,
[NSNumber numberWithInt: kAudioFormatAppleLossless], AVFormatIDKey,
[NSNumber numberWithInt: 1], AVNumberOfChannelsKey,
[NSNumber numberWithInt: AVAudioQualityLow],
AVEncoderAudioQualityKey,
nil];
micInput = [[AVAudioRecorder alloc] initWithURL:newURL settings:recordSettings error:nil];
[newURL release];
[recordSettings release];
[micInput setMeteringEnabled:YES];
On the iPhone with the above code, the scene starts with all of the audio (sound effects, background music, etc.) playing at a really low level, because it is only playing through the phone speaker instead of the external speaker. When I test this on iPad or iPod Touch, the background audio plays through the external speaker as expected. This is a problem, since the volume of the game lowers drastically when playing on the iPhone version during this particular part of the game.
When I comment out the AVAudioSession setup line, the sounds play through the external speaker, but of course I can't get microphone input anymore. Is there any workaround or solution to this problem? I need to be able to record with AVAudioRecorder but still have audio output from the iPhone's external speaker.
Thanks!
Try something like the following after you set up your audio session:
UInt32 ASRoute = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty (
kAudioSessionProperty_OverrideAudioRoute,
sizeof (ASRoute),
&ASRoute
);
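Putting it together with the session setup from the question, the override goes right after the category is set; this is just a sketch (on iOS 6 and later, [[AVAudioSession sharedInstance] overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:nil] achieves the same thing):

#import <AudioToolbox/AudioToolbox.h>

// Sketch: set the category first, then force playback to the loudspeaker
// so game audio stays loud while the recorder keeps the mic.
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];

UInt32 ASRoute = kAudioSessionOverrideAudioRoute_Speaker;
AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                        sizeof(ASRoute),
                        &ASRoute);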