GPU usage of WebP on iPhone, compared to PNG

I am currently working on an iOS game and the image resources seem to take up a little too much space. I heard of WebP and wanted to know more about it.
I did some research on WebP and know that this newer format requires much less space than PNG and that its encoding/decoding speed is fast. But I found no article discussing the GPU burden of using WebP images compared to PNG ones.
Is there any article out there on this topic?
Or can I do the experiment myself? I am coding in VS using cocos2d-x. I don't know what to do if I want to simulate an iOS GPU and monitor its memory usage.
Many thanks!

You can assume that the generated textures are the same either way, i.e. they render at the same speed and use the same amount of memory.
If you want faster loading and rendering and less memory usage, use the .pvr.ccz format.
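As a rough illustration of that loading path, here is a minimal cocos2d-x sketch (assuming the v3.x API; cards.pvr.ccz is a hypothetical atlas file):

```cpp
// Minimal cocos2d-x (v3.x) sketch; "cards.pvr.ccz" is a hypothetical atlas.
// Whatever the source file format (PNG, WebP, PVR), the decoded texture
// occupies the same amount of GPU memory for a given pixel format.
#include "cocos2d.h"
USING_NS_CC;

void loadCardTexture()
{
    // Loads and caches the texture; a .pvr.ccz file is only zlib-inflated
    // on load, so there is no PNG/WebP decode pass.
    Texture2D* tex = Director::getInstance()
                         ->getTextureCache()
                         ->addImage("cards.pvr.ccz");

    // Create a sprite from the cached texture as usual.
    Sprite* card = Sprite::createWithTexture(tex);
    card->setPosition(Vec2(160, 240));
}
```

A .pvr.ccz file is just a zlib-compressed PVR container; if it holds PVRTC data, the texture also stays compressed in GPU memory, which is where the loading-time and memory savings come from.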

Related

Tips on getting best GPU performance while using PNG Texture Atlas?

Hello, I'm a new developer with a spider solitaire app on the App Store. When I profile my app, the profiler reports that GPU usage is "very high." I've tried to cobble together a solution by researching on and off for months, but haven't found one yet.
The spider solitaire variant uses 104 cards, all PNGs, which could potentially be in the scene at the same time. I'm fairly sure from my research that the PNGs are the reason for the performance hit.
The reason I want to use PNGs over less resource-intensive file formats is to reproduce the rounded corners of a playing card. I've tried using shape masks, but that just made performance worse. I have also seen references to "tiling" on SO, but haven't been able to make sense of that information.
I've sized the card images correctly, and I'm getting 60 fps about 99% of the time, but I am concerned about energy consumption. I should mention that the game is 2D SpriteKit.
Is this just a "suck it up, you made a design decision that is costly" type situation? Or is there a better way? Is caching a thing, weak references maybe? Please, be nice, I'm not very experienced with this, but I love it.
Thank you very much to anyone who can provide any resources!

Unity - native texture formats explained

I have read the following here:
When you use a Texture compression format that is not supported on the target platform, the Textures are decompressed to RGBA 32 and stored in memory alongside the compressed Textures. When this happens, time is lost decompressing Textures, and memory is lost because you are storing them twice. In addition, all platforms have different hardware, and are optimised to work most efficiently with specific compression formats; choosing non-compatible formats can impact your game’s performance. The table below shows supported platforms for each compression format.
Let's discuss a specific case. Say I stored a .png file on disk and packaged my game for Android. Now I play that game on an Android device whose GPU requires ETC2 as its native texture format. Am I correct that when I launch the game the following should happen:
Read PNG file from disk to RAM (RAM is used for storing PNG file data)
Decompress PNG to RGBA32 (RAM is used for both PNG and for decompressed data)
Compress RGBA32 to ETC2 and upload it to the GPU (in RAM, if I have a texture cache, I might deallocate the PNG file data, but I need to keep the RGBA32 data for future reuse, or at least the ETC2 data)
This means I am doing a lot of conversions (PNG → RGBA32 → ETC2), and during those conversions I not only use CPU time but also a significant amount of RAM. My question: did I correctly understand what happens when one does not package with the native texture formats for the target platform?
Yes, you have more or less correctly understood what's going on here. However, you misunderstood one thing: how PNG relates to all of this. The texture compression formats implemented by GPUs are very different from PNG's filter+deflate compression, so you get this behavior with every kind of GPU.
What the Unity devs are trying to tell you is that textures can be stored in the very format the GPU works with, and that for optimal performance you should identify which compression formats your target platform supports and bundle your asset files accordingly.
So for a game targeting platform X, identify the compression formats supported by X's GPUs, pack your assets in those formats, and ship the X version with them. Rinse and repeat for other platforms.
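As a back-of-the-envelope illustration of the memory cost described above (this is not Unity code; the 1024×1024 size is a hypothetical example):

```cpp
// Rough sketch of the memory arithmetic for a hypothetical 1024x1024
// texture. ETC2 RGBA8 uses 8 bits per pixel; RGBA32 uses 32 bits per pixel.
#include <cstdio>

int main()
{
    const int width = 1024, height = 1024;
    const long pixels = static_cast<long>(width) * height;

    const long rgba32Bytes = pixels * 4;   // 32 bpp, uncompressed
    const long etc2Bytes   = pixels;       // 8 bpp, ETC2 RGBA8

    std::printf("RGBA32: %ld KiB\n", rgba32Bytes / 1024);  // 4096 KiB
    std::printf("ETC2:   %ld KiB\n", etc2Bytes / 1024);    // 1024 KiB

    // If ETC2 is unsupported and the texture falls back to RGBA32, the
    // in-memory copy alone is 4x larger, before counting the transient
    // PNG data and decode buffers listed in the question.
    return 0;
}
```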

Text to speech app size

Will using a library like OpenEars drastically enlarge my app size? Or can I just extract the text-to-speech parts and get away with that, probably by removing all those languages? I have no idea.
I checked, and the OpenEars sample app is 33 MB, which is big!
So my question is: can I implement text-to-speech in my app without bloating its size so much? I can live with 2-3 MB, but 30...
Thank you!
OpenEars developer here. Just follow the instructions here to reduce your final app size; there's no need to ship all the voices or any framework features you aren't using. Depending on which voices and features you're using, you might see an app-size increase of between 6 MB and ~20 MB, unless you use a large number of the available voices. The sample app uses all of the framework's features and two voices, and supports several iOS versions, so its binary size reflects that.
My guess is you can't; audio takes up a lot of space.
Removing unneeded languages will free some space, but 2-3 MB for all the audio isn't really possible, I'd guess.

Possible to get more than 20K+ triangles at 35fps on iPhone 3GS?

I'm programming a new engine for iOS and I'm at a point where I can test how much power I can get out of my engine.
My code is written in C++ and the engine is written in a highly efficient manner to do streaming, batch rendering, frustum culling, occlusion culling, fast memory managers, etc. However, the results don't satisfy my expectations and I'm wondering if anyone has been able to get more out of their iPhone device.
Right now I'm rendering only the geometry with textures and the best I get is about 20K+ triangles being rendered at ~35fps on my iPhone 3GS.
Is this somehow the maximum iPhone 3GS can do? Or has anyone done better?
P.S. I'm not using triangle strips yet, so I know there's roughly a ~5 fps improvement to be had there.
As far as the maximum possible performance of the 3GS goes, take a look here:
http://www.glbenchmark.com/phonedetails.jsp?benchmark=glpro11&D=Apple%20iPhone%203G%20S&testgroup=lowlevel
Well, I did more research on this. I was already aware of the 7M triangles/sec figure, but that's just a number that doesn't take triangle filling into account.
So, to make sure there wasn't a big bottleneck in my code, I downloaded the Oolong engine and did a comparison; the speed was much the same.
(Core Animation instrument results)
Oolong engine (running the San Angeles demo):
5k to 14k triangles: ~60 fps
20k to 25k triangles: ~40 fps
25k to 30k triangles: ~30 fps
I'm getting very much the same results in terms of speed.

Image processing on the iPhone

I would like to apply image processing on pictures taken on the iPhone.
This processing would involve 2D matrix convolutions etc.
I'm afraid that the performance with nested NSArrays would be pretty bad. What is the right way to manipulate pixel-based images? Should I simply use C arrays allocated with malloc?
Have you looked at the Quartz 2D engine available in the iPhone SDK? Or perhaps Core Graphics? Apple has a nice overview document describing all the different imaging technologies available on the iPhone. Unfortunately there isn't anything as nice as ImageKit on the iPhone yet.
I suggest using the OpenCV image processing library, since it contains well-optimized algorithms for almost anything you might want. OpenCV will definitely be faster than manual processing with NSArray.
But there is one major drawback: OpenCV is written in C/C++, so you will have to convert your UIImage to OpenCV's native image format to do the processing. But it's really easy to Google how to do this.
I use OpenCV in my own iPhone project; here is a small how-to post on building OpenCV for iOS: http://computer-vision-talks.com/2010/12/building-opencv-for-ios/
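As a sketch of the kind of 2D convolution being discussed, here is a minimal OpenCV example (the file name and sharpen kernel are placeholders; on iOS you would normally obtain the cv::Mat by converting a UIImage rather than calling imread):

```cpp
// Minimal OpenCV 2D convolution sketch; "photo.png" is a hypothetical input.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("photo.png");
    if (src.empty()) return 1;

    // 3x3 sharpening kernel (any convolution kernel works the same way).
    cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
                       0, -1,  0,
                      -1,  5, -1,
                       0, -1,  0);

    cv::Mat dst;
    cv::filter2D(src, dst, /*ddepth=*/-1, kernel);  // optimized convolution

    cv::imwrite("photo_sharpened.png", dst);
    return 0;
}
```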
Yes, you would use a C array since that's how you get back the pixel data anyway.
As mentioned, you should check whether Quartz 2D can do the manipulations you are interested in, since it would probably perform better, being hardware accelerated. If not, just run your own processing over the array of pixels.
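For the raw C-array approach, here is a sketch (C++ against the Core Graphics C API, error handling omitted) of copying a CGImage's pixels into a malloc'd RGBA buffer that you can then convolve yourself:

```cpp
// Sketch: decode a CGImage into a plain malloc'd RGBA8888 buffer.
#include <CoreGraphics/CoreGraphics.h>
#include <cstdlib>

unsigned char* copyRGBAPixels(CGImageRef image, size_t* outWidth, size_t* outHeight)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = width * 4;  // 4 bytes per pixel (RGBA, 8 bits each)

    unsigned char* pixels =
        static_cast<unsigned char*>(std::malloc(bytesPerRow * height));

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(
        pixels, width, height, 8, bytesPerRow, colorSpace,
        kCGImageAlphaPremultipliedLast);

    // Drawing the image into the bitmap context decodes it into our buffer.
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);

    CGContextRelease(ctx);
    CGColorSpaceRelease(colorSpace);

    *outWidth = width;
    *outHeight = height;
    return pixels;  // caller frees with free()
}
```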
The iPhone's GPU also has far more raw processing power than its CPU for this kind of work; you can tap into it with OpenGL ES shaders (OpenCL is not publicly exposed on iOS).