iPhone original renders faster than iPhone 3GS?

I'm working on an iPhone game that uses C++ and the OpenGL ES 1.x library.
It works fine in the simulator, but when I install it on real hardware, the original iPhone takes about 20 milliseconds to render a frame while the iPhone 3GS takes 35-40 milliseconds.
I've tried various OS/device combinations, including 3GS + iOS 3.1.2, 3G + iOS 4.0, 3GS + iOS 4.1, and iPad + iOS 3.2. All of them render much slower than the original iPhone, which seems absurd to me. I've googled everything I can think of and fixed every issue that might be related, but nothing changed.
I have 2 devices on which this code renders faster: 1) original iPhone with iOS 3.1.3, 2) iPod touch with iOS 3.1.3. Both take about 20 milliseconds per frame.
And 4 devices which render mysteriously slower: 1) iPhone 3G with iOS 4.0, 2) iPhone 3GS with iOS 3.1.2, 3) iPhone 3GS with iOS 4.1, 4) iPad with iOS 3.2. The iPhones take about 35-40 milliseconds per frame and the iPad takes around 25.
I use PVRTC textures, which are cooked offline and packed into a bundle. The game uses a total of ten 512x512 textures and three 1024x1024 textures.
The code that creates and binds the textures is as follows:
GLenum internalFormat = 0;
GLenum pixelType = 0;

// resolve type
ResetFlags_();
assert(2==attr.Dimension && 1==attr.Depth);
switch (attr.Format)
{
case FORMAT_PVRTC2:
    assert(attr.Width==attr.Height);
    if (attr.AlphaBits>0)
        internalFormat = GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG;
    else
        internalFormat = GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG;
    break;
case FORMAT_PVRTC4:
    assert(attr.Width==attr.Height);
    if (attr.AlphaBits>0)
        internalFormat = GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG;
    else
        internalFormat = GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG;
    break;
... other formats ...
}

// prepare temp buffer to load
MemoryBuffer tmpBuffer(true);
uint8* buffer = tmpBuffer.GetWritePtr(attr.TextureSize);

// read data
stream.Read(buffer, attr.TextureSize);
if (stream.Fail())
    return false;

// init
width_     = attr.Width;
height_    = attr.Height;
LODs_      = attr.LODs;
alphaBits_ = attr.AlphaBits;

// create and upload texture
glGenTextures(1, &glTexture_);
glBindTexture(GL_TEXTURE_2D, glTexture_);
uint32 offset = 0;
uint32 dim = width_; // = height
uint32 w, h;
switch (internalFormat)
{
case GL_COMPRESSED_RGBA_PVRTC_2BPPV1_IMG:
case GL_COMPRESSED_RGB_PVRTC_2BPPV1_IMG:
case GL_COMPRESSED_RGBA_PVRTC_4BPPV1_IMG:
case GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG:
    for (uint32 i=0; i<LODs_; ++i) {
        assert(offset<attr.TextureSize);
        w = dim >> ((FORMAT_PVRTC2==attr.Format) ? 3:2);
        h = dim >> 2;
        // Clamp to minimum number of blocks
        if (w<2) w = 2;
        if (h<2) h = 2;
        uint32 const image_size = w * h * 8; // 8 bytes for each block
        glCompressedTexImage2D(GL_TEXTURE_2D, i, internalFormat, dim, dim, 0, image_size, buffer+offset);
        dim >>= 1;
        offset += image_size;
    }
    break;
... other formats ...
}
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST_MIPMAP_NEAREST); // tri-linear?
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
SetContext_(&glTexture_);
return true;
The rendering code is huge because it uses an engine developed by others. As far as I can tell, it uses glDrawArrays and no shaders.
Has anyone encountered the same problem before? I really can't see why the original iPhone would render much faster than the iPhone 3GS.
P.S. I forgot to mention: I draw only textured 2D rectangles, around 20 per frame (one background and one 480x360 UI panel; the rest are typically 64x64).

The behaviour you are seeing could be due to the fixed-function pipeline (FFP) being emulated on top of the programmable pipeline (i.e. shaders) on the newer GPUs.
Could you run a test that loads and displays your textures in some simple way, completely without your engine?
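For reference, a minimal fixed-function test along those lines might look like the sketch below. It assumes an EAGL context and framebuffer are already set up and that textureId is one of the textures created by the loader above; the function name is made up for illustration.

#include <OpenGLES/ES1/gl.h>

// Minimal OpenGL ES 1.x test: draw one textured quad with glDrawArrays,
// bypassing the engine entirely.
static void DrawTestQuad(GLuint textureId)
{
    static const GLfloat verts[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f,
    };
    static const GLfloat uvs[] = {
        0.0f, 1.0f,
        1.0f, 1.0f,
        0.0f, 0.0f,
        1.0f, 0.0f,
    };

    glClear(GL_COLOR_BUFFER_BIT);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, textureId);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glTexCoordPointer(2, GL_FLOAT, 0, uvs);

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

If a bare loop of ~20 such quads shows the same per-device timing gap, the engine's draw path is probably not the culprit and the difference lies elsewhere (texture formats, framebuffer setup, or how the frame is presented).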


Using C++ AMP with Direct2D

Is it possible to use a texture generated by C++ AMP as a screen buffer?
I would like to generate an image with my C++ AMP code (already done) and use this image to fill the entire screen of a Windows 8 Metro app. The image is updated 60 times per second.
I'm not at all fluent in Direct3D. I used the Direct2D app template as a starting point.
First I tried to manipulate the swap chain buffer directly from the C++ AMP code, but any attempt to write to that texture caused an error.
Processing data with AMP on the GPU, then moving it to CPU memory to create a bitmap I can use with the D2D API, seems very inefficient.
Can somebody share a piece of code that would let me manipulate the swap chain buffer texture with C++ AMP directly (without the data leaving the GPU), or at least populate that buffer from another texture that stays on the GPU?
You can interop between an AMP texture<> and an ID3D11Texture2D resource. The complete code and other examples of interop can be found in the Chapter 11 samples here.
// Get a D3D texture resource from an AMP texture.
texture<int, 2> text(100, 100);
CComPtr<ID3D11Texture2D> texture;
HRESULT hr = S_OK;
IUnknown* unkRes = get_texture(text);
hr = unkRes->QueryInterface(__uuidof(ID3D11Texture2D),
                            reinterpret_cast<LPVOID*>(&texture));
assert(SUCCEEDED(hr));

// Create an AMP texture from a D3D texture resource.
const int height = 100;
const int width = 100;
D3D11_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Height = height;
desc.Width = width;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_UINT;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
CComPtr<ID3D11Texture2D> dxTexture = nullptr;
hr = device->CreateTexture2D(&desc, nullptr, &dxTexture);
assert(SUCCEEDED(hr));
texture<uint4, 2> ampTexture = make_texture<uint4, 2>(dxView, dxTexture);
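For the interop to be useful against your swap chain, the AMP accelerator_view and the D3D texture generally need to be backed by the same device. A minimal sketch of wiring that up, assuming device is the ID3D11Device behind your swap chain (the helper name is made up):

#include <amp.h>
#include <amp_graphics.h>
#include <d3d11.h>

// Wrap the app's existing D3D11 device in an AMP accelerator_view so that
// textures created via make_texture<> live on the same device as the swap chain.
concurrency::accelerator_view MakeAmpViewFromDevice(ID3D11Device* device)
{
    return concurrency::direct3d::create_accelerator_view(device);
}

You would then pass this view as the dxView argument to make_texture<> above, write your AMP kernel's output into that texture, and bind its ID3D11Texture2D interface as a shader resource for the final draw/present, so the pixel data never has to leave the GPU.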

Quartz 2d Drawing: Perfect on simulator / bad on device. Distribution Vs Debug

I've just finished the waveform drawing code for my app. I'm pretty happy with it, and on the simulator it looks great.
The problem is that when I run it on an iPad it doesn't draw properly. On the simulator the drawing looks like a nice regular waveform, whereas on the iPad it just looks like one big rectangle.
I'm not sure how to even begin troubleshooting something like this.
Can you offer any suggestions as to why it works on the simulator and not on the iPad?
If I can provide any more information that might help, please let me know.
Calculation
-(void) plotwaveform:(AudioSourceOBJ)source
{
    int count = source->framecount;
    int blocksize = count / resolution;
    currentmaxvalue = 0;
    int readindex = 0;

    CGRect *addrects = malloc(resolution * sizeof(CGRect));
    float *heights = malloc(resolution * sizeof(float));

    for (int i = 0; i < resolution; i++) {
        AudioUnitSampleType *blockofaudio;
        blockofaudio = malloc(blocksize * sizeof(AudioUnitSampleType));
        memcpy(blockofaudio, &source->leftoutput[readindex], (blocksize * sizeof(AudioUnitSampleType)));

        float sample = [self getRMS:blockofaudio blocksize:blocksize];
        heights[i] = sample;
        readindex += blocksize;
    }

    for (int scale = 0; scale < resolution; scale++) {
        float h = heights[scale];
        h = (h / currentmaxvalue) * 45;
        addrects[scale] = CGRectMake(scale, 0, 1, h);
    }

    if (waveform) {
        [waveform release];
        [waveform removeFromSuperview];
        waveform = nil;
    }

    CGMutablePathRef halfpath = CGPathCreateMutable();
    CGPathAddRects(halfpath, NULL, addrects, resolution);

    CGMutablePathRef path = CGPathCreateMutable();
    CGAffineTransform xf = CGAffineTransformIdentity;
    xf = CGAffineTransformTranslate(xf, 0.0, 45);
    CGPathAddPath(path, &xf, halfpath);

    xf = CGAffineTransformIdentity;
    xf = CGAffineTransformTranslate(xf, 0.0, 45);
    xf = CGAffineTransformScale(xf, 1.0, -1);
    CGPathAddPath(path, &xf, halfpath);

    CGPathRelease(halfpath);
    free(addrects);

    waveform = [[Waveform alloc] initWithFrameAndPlotdata:CGRectMake(0, 0, 400, 90) thepoints:path];
    [self.view addSubview:waveform];
}

-(float) getRMS:(AudioUnitSampleType *)blockofaudio blocksize:(int)blocksize
{
    float output;
    float sqsummed;
    float sqrootofsum;
    float val;

    for (int i = 0; i < blocksize; i++) {
        val = blockofaudio[i];
        sqsummed += val * val;
    }
    sqrootofsum = sqsummed / blocksize;
    output = sqrt(sqrootofsum);

    // find the max
    if (output > currentmaxvalue)
    {
        currentmaxvalue = output;
    }
    return output;
}
Drawing
- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(ctx, 0, 0, 0, .5);
    CGContextBeginPath(ctx);
    CGContextAddPath(ctx, mutatablepath);
    //CGContextStrokePath(ctx);
    CGContextFillPath(ctx);
    CFRelease(mutatablepath);
}
EDIT
I pass a bunch of audio data to the plotwaveform function and divide it into chunks. For each chunk of audio I calculate the RMS and keep track of the maximum value. When that's done, I use the max value to scale my RMS values to fit my viewport.
I have noticed a strange thing: if I NSLog the values of the "output" variable in the getRMS function, the waveform draws fine on the device. If I don't NSLog the values, the waveform does not draw properly.
That seems bizarre to me.
One major error I see is that you never initialize sqsummed inside the getRMS:blocksize: method, so its initial value is garbage. What the garbage happens to be depends on the details of the surrounding code, how the compiler allocates registers for variables, and so on. Adding an NSLog statement could well change what the garbage is next time around the loop.
If the garbage happens to always correspond to a very small float value you'll get expected behavior, while if it happens to always correspond to some extremely large float value (large enough to swamp the actual samples) you'll get one big rectangle, while if it happens to vary you'll get a noise-like output.
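A minimal fix is simply to zero the accumulator before the loop. Sketched here as a plain C-style function rather than the original Objective-C method, and assuming float samples (the original uses AudioUnitSampleType, which may be a fixed-point type depending on your setup):

#include <math.h>

// RMS of a block of samples, with the accumulator explicitly initialized.
// Mirrors the getRMS:blocksize: method above.
static float BlockRMS(const float* samples, int blocksize)
{
    float sqsummed = 0.0f;                 // was uninitialized in the original
    for (int i = 0; i < blocksize; i++) {
        float val = samples[i];
        sqsummed += val * val;
    }
    return sqrtf(sqsummed / blocksize);
}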
In any case, remember that the simulator has your entire Mac's RAM and CPU power to work with; device processing capacity is sadly not emulated in the iPhone/iPad simulator.

EXC_BAD_ACCESS when calling avcodec_encode_video

I have an Objective-C class (although I don't believe this is anything Obj-C specific) that I am using to write a video out to disk from a series of CGImages. (The code I use at the top to get the pixel data comes right from Apple: http://developer.apple.com/mac/library/qa/qa2007/qa1509.html). I successfully create the codec and context; everything goes fine until it gets to avcodec_encode_video, where I get EXC_BAD_ACCESS. I think this should be a simple fix, but I just can't figure out where I am going wrong.
I took out some error checking for succinctness. 'c' is an AVCodecContext*, which is created successfully.
-(void)addFrame:(CGImageRef)img
{
    CFDataRef bitmapData = CGDataProviderCopyData(CGImageGetDataProvider(img));
    long dataLength = CFDataGetLength(bitmapData);

    uint8_t* picture_buff = (uint8_t*)malloc(dataLength);
    CFDataGetBytes(bitmapData, CFRangeMake(0, dataLength), picture_buff);

    AVFrame *picture = avcodec_alloc_frame();
    avpicture_fill((AVPicture*)picture, picture_buff, c->pix_fmt, c->width, c->height);

    int outbuf_size = avpicture_get_size(c->pix_fmt, c->width, c->height);
    uint8_t *outbuf = (uint8_t*)av_malloc(outbuf_size);

    out_size = avcodec_encode_video(c, outbuf, outbuf_size, picture); // ERROR occurs here
    printf("encoding frame %3d (size=%5d)\n", i, out_size);
    fwrite(outbuf, 1, out_size, f);

    CFRelease(bitmapData);
    free(picture_buff);
    free(outbuf);
    av_free(picture);
    i++;
}
I have stepped through it dozens of times. Here are some numbers...
dataLength = 408960
picture_buff = 0x5c85000
picture->data[0] = 0x5c85000 -- which I take to mean that avpicture_fill worked...
outbuf_size = 408960
and then I get EXC_BAD_ACCESS at avcodec_encode_video. Not sure if it's relevant, but most of this code comes from api-example.c. I am using Xcode, compiling for armv6/armv7 on Snow Leopard.
Thanks so much in advance for help!
I don't have enough information here to point to the exact error, but I think the problem is that the input picture contains less data than avcodec_encode_video() expects:
avpicture_fill() only sets some pointers and numeric values in the AVFrame structure. It does not copy anything, and does not check whether the buffer is large enough (and it cannot, since the buffer size is not passed to it). It does something like this (copied from ffmpeg source):
size = picture->linesize[0] * height;
picture->data[0] = ptr;
picture->data[1] = picture->data[0] + size;
picture->data[2] = picture->data[1] + size2;
picture->data[3] = picture->data[1] + size2 + size2;
Note that the width and height come from the variable "c" (the AVCodecContext, I assume), so they may be larger than the actual size of the input frame.
It is also possible that the width/height are fine but the pixel format of the input frame differs from what is passed to avpicture_fill() (note that the pixel format also comes from the AVCodecContext, which may differ from the input). For example, if c->pix_fmt is RGBA and the input buffer is in YUV420 format (or, more likely for iPhone, a biplanar YCbCr), then the size of the input buffer is width*height*1.5, but avpicture_fill() expects width*height*4.
So checking the input/output geometry and pixel formats should lead you to the cause of the error. If that doesn't help, I suggest trying to compile for i386 first; it is tricky to compile FFmpeg for the iPhone properly.
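As a quick sanity check along those lines, you could compare the number of bytes you actually got from the CGImage against what avpicture_fill() and the encoder will assume for the codec's pixel format and dimensions. A hedged sketch using the same old-style FFmpeg API as the question (the helper name is made up; call it with c and dataLength just before avpicture_fill()):

#include <libavcodec/avcodec.h>

// Returns true if `have` bytes is enough for the codec context's pixel format
// and dimensions; logs the mismatch otherwise.
static bool FrameSizeMatchesCodec(const AVCodecContext* c, long have)
{
    int expected = avpicture_get_size(c->pix_fmt, c->width, c->height);
    if ((long)expected > have) {
        printf("frame size mismatch: have %ld bytes, codec expects %d (%dx%d)\n",
               have, expected, c->width, c->height);
        return false;   // encoding would read past the buffer -> EXC_BAD_ACCESS
    }
    return true;
}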
Does the codec you are encoding support the RGB color space? You may need to use libswscale to convert to I420 before encoding. What codec are you using? Can you post the code where you initialize your codec context?
The function RGBtoYUV420P may help you.
http://www.mail-archive.com/libav-user#mplayerhq.hu/msg03956.html
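If the conversion does turn out to be necessary, a rough libswscale sketch is below. It assumes the CGImage bytes really are packed RGBA (check the bitmap info; BGRA/ARGB orderings are common on iOS), that dst was allocated with avpicture_alloc() for PIX_FMT_YUV420P, and that your FFmpeg version still uses the old PIX_FMT_* names; the exact const-ness of the sws_scale() arguments also varies between versions.

#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>

// Convert one packed RGBA frame into the planar YUV420P frame `dst`
// before handing it to avcodec_encode_video().
static void ConvertRGBAToYUV420P(const uint8_t* rgba, AVFrame* dst,
                                 int width, int height)
{
    struct SwsContext* sws = sws_getContext(width, height, PIX_FMT_RGBA,
                                            width, height, PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    const uint8_t* srcData[4]   = { rgba, NULL, NULL, NULL };
    int            srcStride[4] = { width * 4, 0, 0, 0 };

    sws_scale(sws, srcData, srcStride, 0, height, dst->data, dst->linesize);
    sws_freeContext(sws);
}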

iPhone audio analysis

I'm looking into developing an iPhone app that will potentially involve a "simple" analysis of audio it receives from the standard phone mic. Specifically, I am interested in the highs and lows the mic picks up; everything in between is irrelevant to me. Is there an app that does this already (just so I can see what it's capable of)? And where should I look to get started on such code? Thanks for your help.
Look in the Audio Queue framework. This is what I use to get a high water mark:
AudioQueueRef audioQueue; // Imagine this is correctly set up

UInt32 dataSize = sizeof(AudioQueueLevelMeterState) * recordFormat.mChannelsPerFrame;
AudioQueueLevelMeterState *levels = (AudioQueueLevelMeterState*)malloc(dataSize);

float channelAvg = 0;
OSStatus rc = AudioQueueGetProperty(audioQueue, kAudioQueueProperty_CurrentLevelMeter, levels, &dataSize);
if (rc) {
    NSLog(@"AudioQueueGetProperty(CurrentLevelMeter) returned %d", (int)rc);
} else {
    for (int i = 0; i < recordFormat.mChannelsPerFrame; i++) {
        channelAvg += levels[i].mPeakPower;
    }
}
free(levels);

// This works because one channel always has an mAveragePower of 0.
return channelAvg;
You can get peak power in either dB Free Scale (with kAudioQueueProperty_CurrentLevelMeterDB) or simply as a float in the interval [0.0, 1.0] (with kAudioQueueProperty_CurrentLevelMeter).
Don't forget to activate level metering for AudioQueue first:
UInt32 d = 1;
OSStatus status = AudioQueueSetProperty(mQueue, kAudioQueueProperty_EnableLevelMetering, &d, sizeof(UInt32));
Check the 'SpeakHere' sample code. It shows you how to record audio using the AudioQueue API, and it also contains code to analyze the audio in real time to drive a level meter.
You might actually be able to use most of that level meter code to respond to 'highs' and 'lows'.
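To give a concrete idea of what responding to 'highs' and 'lows' might look like on top of that metering code, here is a hedged sketch; the thresholds are arbitrary and GetCurrentPeakLevel() is a hypothetical wrapper around the AudioQueueGetProperty(kAudioQueueProperty_CurrentLevelMeter) snippet above:

// Simple thresholding on the 0.0-1.0 peak value from the level meter.
// GetCurrentPeakLevel() is hypothetical; tune the thresholds by experiment.
const float kHighThreshold = 0.8f;
const float kLowThreshold  = 0.1f;

float peak = GetCurrentPeakLevel();
if (peak >= kHighThreshold) {
    // loud event: handle a "high"
} else if (peak <= kLowThreshold) {
    // near-silence: handle a "low"
}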
The aurioTouch example code performs Fourier analysis on the mic input and could be a good starting point:
https://developer.apple.com/iPhone/library/samplecode/aurioTouch/index.html
Probably overkill for your application.

Playing generated audio on an iPhone

As a throwaway project for the iPhone to get me up to speed with Objective-C and the iPhone libraries, I've been trying to create an app that will play different kinds of random noise.
I've been constructing the noise as an array of floats normalized to [-1, 1].
Where I'm stuck is in playing that generated data. It seems like this should be fairly simple, but I've looked into using AudioUnit and AVAudioPlayer, and neither of these seems optimal.
AudioUnit apparently requires a few hundred lines of code even for this simple task, and AVAudioPlayer seems to require converting the audio into something Core Audio can understand (as best I can tell, that means LPCM in a WAV file).
Am I overlooking something, or are these really the best ways to play some sound data stored in array form?
Here's some code to use AudioQueue, which I've modified from the SpeakHere example. I kind of pasted the good parts, so there may be something dangling here or there, but this should be a good start if you want to use this approach:
AudioStreamBasicDescription format;
memset(&format, 0, sizeof(format));
format.mSampleRate       = 44100;
format.mFormatID         = kAudioFormatLinearPCM;
format.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
format.mChannelsPerFrame = 1;
format.mBitsPerChannel   = 16;
format.mBytesPerFrame    = (format.mBitsPerChannel / 8) * format.mChannelsPerFrame;
format.mFramesPerPacket  = 1;
format.mBytesPerPacket   = format.mBytesPerFrame * format.mFramesPerPacket;

AudioQueueRef queue;
AudioQueueNewOutput(&format,
                    AQPlayer::AQOutputCallback,
                    this, // opaque reference to whatever you like
                    CFRunLoopGetCurrent(),
                    kCFRunLoopCommonModes,
                    0,
                    &queue);

const int bufferSize = 0xA000; // 40K - around 1/2 sec of 44kHz 16 bit mono PCM
for (int i = 0; i < kNumberBuffers; ++i)
    AudioQueueAllocateBufferWithPacketDescriptions(queue, bufferSize, 0, &mBuffers[i]);

AudioQueueSetParameter(queue, kAudioQueueParam_Volume, 1.0);

UInt32 category = kAudioSessionCategory_MediaPlayback;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
AudioSessionSetActive(true);

// prime the queue with some data before starting
for (int i = 0; i < kNumberBuffers; ++i)
    OutputCallback(this, queue, mBuffers[i]);

AudioQueueStart(queue, NULL);
The code above refers to this output callback. Each time this callback executes, fill the buffer passed in with your generated audio. Here, I'm filling it with random noise.
void OutputCallback(void* inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer) {
    //AQPlayer* that = (AQPlayer*) inUserData;

    // Fill the buffer passed in with generated audio (random noise here).
    inCompleteAQBuffer->mAudioDataByteSize = inCompleteAQBuffer->mAudioDataBytesCapacity;
    uint8_t* data = (uint8_t*)inCompleteAQBuffer->mAudioData;
    for (UInt32 i = 0; i < inCompleteAQBuffer->mAudioDataByteSize; ++i)
        data[i] = rand();

    AudioQueueEnqueueBuffer(inAQ, inCompleteAQBuffer, 0, NULL);
}
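Since the question's data is an array of floats in [-1, 1] and the queue format above is 16-bit signed integer PCM, the fill loop needs a conversion step rather than raw rand() bytes. A minimal sketch to drop into the callback (noiseSamples stands in for the questioner's generated float array):

// Convert generated float samples in [-1, 1] to the 16-bit signed PCM layout
// declared in the AudioStreamBasicDescription above.
SInt16* out = (SInt16*)inCompleteAQBuffer->mAudioData;
UInt32 frameCount = inCompleteAQBuffer->mAudioDataBytesCapacity / sizeof(SInt16);
for (UInt32 i = 0; i < frameCount; ++i) {
    float s = noiseSamples[i];                 // assumed to be in [-1, 1]
    if (s > 1.0f)  s = 1.0f;                   // clamp, just in case
    if (s < -1.0f) s = -1.0f;
    out[i] = (SInt16)(s * 32767.0f);
}
inCompleteAQBuffer->mAudioDataByteSize = frameCount * sizeof(SInt16);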
It sounds like you're coming from a platform that had a simple built-in tone generator. The iPhone doesn't have anything like that. It's easier to play simple sounds from sound files. AudioUnit is for actually processing and generating real music.
So, yes, you do need an audio file to play a sound simply.