Does anyone know if a JPEG compression library that produces decent image quality has been ported to the iPhone? The built-in algorithm behind UIImageJPEGRepresentation produces huge files relative to their quality, which makes uploading images from the phone over the network much slower than necessary. Using the GD library built into PHP, I can recompress a JPEG produced on the iPhone to one tenth of the file size without significant loss of quality...
Well, the GD library uses the IJG's libjpeg library for compression. So if you want the same quality, you should use the same library:
http://www.ijg.org/
It is the most commonly used JPEG compression/decompression library, by the way.
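For reference, the quality knob in libjpeg is jpeg_set_quality(). A minimal compression sketch might look like this (assuming you already have a raw RGB pixel buffer; error handling omitted):

    #include <stdio.h>
    #include <jpeglib.h>

    // Compress a raw RGB buffer to a JPEG file at the given quality (0-100).
    void write_jpeg(const char* path, unsigned char* rgb,
                    int width, int height, int quality)
    {
        struct jpeg_compress_struct cinfo;
        struct jpeg_error_mgr jerr;
        cinfo.err = jpeg_std_error(&jerr);
        jpeg_create_compress(&cinfo);

        FILE* out = fopen(path, "wb");
        jpeg_stdio_dest(&cinfo, out);

        cinfo.image_width      = width;
        cinfo.image_height     = height;
        cinfo.input_components = 3;        // RGB
        cinfo.in_color_space   = JCS_RGB;
        jpeg_set_defaults(&cinfo);
        jpeg_set_quality(&cinfo, quality, TRUE); // where you trade size for quality

        jpeg_start_compress(&cinfo, TRUE);
        while (cinfo.next_scanline < cinfo.image_height) {
            JSAMPROW row = rgb + cinfo.next_scanline * width * 3;
            jpeg_write_scanlines(&cinfo, &row, 1);
        }
        jpeg_finish_compress(&cinfo);
        jpeg_destroy_compress(&cinfo);
        fclose(out);
    }

Quality values around 60-75 are usually where GD-like file sizes come from; experiment by eye.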
I would like to know if there is a better library as well. I'm using .NET to compress JPEG images on the server and it does a much better job than the iPhone's UIImageJPEGRepresentation. I'd like to get them as small as possible on the iPhone before uploading, as it is a dreadfully slow process.
I have read the following here:
When you use a Texture compression format that is not supported on the
target platform, the Textures are decompressed to RGBA 32 and stored
in memory alongside the compressed Textures. When this happens, time
is lost decompressing Textures, and memory is lost because you are
storing them twice. In addition, all platforms have different
hardware, and are optimised to work most efficiently with specific
compression formats; choosing non-compatible formats can impact your
game’s performance. The table below shows supported platforms for each
compression format.
Let's discuss a specific case. Say I stored a .png file on disc and packaged my game for Android. Now I play that game on an Android device whose GPU requires ETC2 as its native texture format. Am I correct that when I enter the game the following should happen:
Read PNG file from disk to RAM (RAM is used for storing PNG file data)
Decompress PNG to RGBA32 (RAM is used for both PNG and for decompressed data)
Compress RGBA32 to ETC2 and upload to GPU (if I keep a texture cache in RAM, I might deallocate the memory for the PNG file data, but I need to keep the RGBA32 data for future reuse, or at least the ETC2 data)
This means I am doing lots of conversions, PNG -> RGBA32 -> ETC2, and during those conversions I not only use CPU resources but also significantly utilize RAM. My question: have I correctly understood what happens when one does not package with the native texture formats of the targeted platform?
Yes, you have more or less correctly understood what's going on here. However you misunderstood one thing: the way PNG relates to all of this. The compression methods that GPUs implement for textures are very different from PNG's filter+deflate method, so you get this behavior with every kind of GPU; PNG is never a format a GPU can sample from directly.
What the Unity devs are trying to tell you is that textures can be stored in the very format the GPU works with, and that for optimal performance you should identify which compression formats are supported on your target platform and bundle your asset files in those formats.
So for a game targeting platform X, identify the compression formats supported by X's GPUs, pack your assets in one of those formats, and ship the X version with them. Rinse and repeat for other platforms.
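To put rough numbers on the memory point, here is a back-of-envelope for a single 1024x1024 texture (the per-pixel sizes are the standard ones for RGBA32 and ETC2 RGBA8; the PNG's own size varies with content, so it is left out):

    #include <cstdio>
    #include <cstddef>

    // Back-of-envelope RAM cost of one 1024x1024 texture at each stage.
    int main()
    {
        const std::size_t w = 1024, h = 1024;
        const std::size_t rgba32 = w * h * 4; // RGBA32: 4 bytes per pixel
        const std::size_t etc2   = w * h;     // ETC2 RGBA8: 8 bits per pixel
        std::printf("RGBA32: %zu bytes (~4 MB)\n", rgba32);
        std::printf("ETC2:   %zu bytes (~1 MB)\n", etc2);
        // On a mismatched platform you may hold both at once, plus the PNG bytes.
        return 0;
    }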
I am currently working on an iOS game and the image resources seem to be a little too large. I heard of WebP and wanted to know more about it.
I did some research on WebP and learned that this format requires much less space than PNG and that its encoding/decoding speed is fast. But I found no article discussing the GPU burden of using WebP pictures compared to PNG ones.
Is there any article out there on this topic?
Or can I do the experiment myself? I am coding in VS using cocos2d-x. I don't know what to do if I want to simulate an iOS GPU and monitor its memory usage.
Many thanks!
You can assume that the textures generated are the same either way, i.e. they render at the same speed and use the same amount of memory: both WebP and PNG are decoded to the same uncompressed texture before upload, so the choice only affects file size on disk and decode time at load.
If you want faster loading and rendering and less memory usage, use the .pvr.ccz format.
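For what it's worth, loading one in cocos2d-x is the same one-liner as for a PNG. A sketch, assuming a cocos2d-x 2.x project and a texture already converted to .pvr.ccz (e.g. with TexturePacker; the file name is hypothetical):

    #include "cocos2d.h"
    USING_NS_CC;

    // Add a sprite backed by a gzip-compressed PVR texture; unlike PNG,
    // PVRTC data can stay compressed in GPU memory.
    void addBackground(CCNode* parent)
    {
        // Only needed if your packing tool premultiplied the alpha channel.
        CCTexture2D::PVRImagesHavePremultipliedAlpha(true);

        CCSprite* sprite = CCSprite::create("background.pvr.ccz"); // hypothetical asset
        sprite->setPosition(ccp(240, 160));
        parent->addChild(sprite);
    }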
I am developing an application related to meditation.
It contains lots of high-quality .mp3 files, which is why the application size balloons to nearly 2 GB. I have searched for ways to compress them, but can't find anything for audio files; Xcode only applies compression to .png files.
If anyone knows anything about this, please make a suggestion.
Thanks in advance,
MP3 is lossy compression. If you are willing to accept lossy files, MP3 is much better compression than you will get with a regular data compression algorithm, such as gzip.
What I would suggest is getting the original files and re-compressing them with AAC, down to a bitrate that makes the size of your application acceptable. AAC sounds better than MP3 at low bitrates, and compressing from the originals means you won't inherit artifacts from the already lossy-compressed MP3 files.
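If you have a Mac handy, one way to do that conversion is Apple's bundled afconvert tool; a sketch (the 96 kbps bitrate and file names are placeholders to tune by ear):

    afconvert -f m4af -d aac -b 96000 original.wav compressed.m4a

The resulting .m4a plays natively on the iPhone.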
I would like to apply image processing on pictures taken on the iPhone.
This processing would involve 2D matrix convolutions etc.
I'm afraid that the performance with nested NSArrays would be pretty bad. What is the right way to manipulate pixel-based images? Should I simply use C arrays allocated with malloc?
Have you looked at the Quartz 2D engine available in the iPhone SDK? Or perhaps Core Graphics? Apple has a nice overview document describing all the different imaging technologies available on the iPhone. Unfortunately there isn't anything as nice as ImageKit on the iPhone yet.
I suggest using the OpenCV image processing library, since it contains well-optimized algorithms for almost anything you want. OpenCV will definitely be faster than manual processing with NSArray.
There is one major drawback: OpenCV is written in C/C++, so you will have to convert your UIImage to OpenCV's native image format to do the processing. But it's really easy to google how to do this.
I use OpenCV in my own iPhone project; here is a small how-to post on building OpenCV for iPhone: http://computer-vision-talks.com/2010/12/building-opencv-for-ios/
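For a flavor of the API, here is a sketch of a 2D convolution with OpenCV's filter2D, assuming you have already converted your image to a cv::Mat (the 3x3 sharpening kernel is just an example):

    #include <opencv2/imgproc/imgproc.hpp>

    // Apply a 3x3 sharpening kernel using OpenCV's optimized 2D convolution.
    cv::Mat convolve(const cv::Mat& src)
    {
        cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
             0, -1,  0,
            -1,  5, -1,
             0, -1,  0);
        cv::Mat dst;
        cv::filter2D(src, dst, /*ddepth=*/-1, kernel); // -1: same depth as src
        return dst;
    }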
Yes, you would use a C array since that's how you get back the pixel data anyway.
As mentioned, you should look and see whether you can use Quartz 2D to do the manipulations you are interested in, as it is likely to perform better than naive per-pixel code. If not, just do your own processing over the array of pixels.
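As an illustration of the C-array approach, here is one common way to get at the raw RGBA bytes of a CGImage using Core Graphics and malloc (a sketch; error handling omitted):

    #include <CoreGraphics/CoreGraphics.h>
    #include <stdlib.h>

    // Draw a CGImage into a bitmap context backed by a plain C array,
    // giving direct access to the RGBA bytes for convolutions etc.
    // The caller must free() the returned buffer.
    unsigned char* copyPixels(CGImageRef image, size_t* outWidth, size_t* outHeight)
    {
        size_t width  = CGImageGetWidth(image);
        size_t height = CGImageGetHeight(image);
        unsigned char* pixels = (unsigned char*)malloc(width * height * 4);

        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(
            pixels, width, height,
            8,            // bits per component
            width * 4,    // bytes per row
            colorSpace,
            kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(colorSpace);

        CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
        CGContextRelease(context);

        *outWidth  = width;
        *outHeight = height;
        return pixels; // pixels[(y * width + x) * 4 + c], c = R, G, B, A
    }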
The iPhone's GPU also has far more raw processing power than its CPU, though on iOS you reach it through OpenGL ES shaders rather than OpenCL, which is not publicly available there.
I want to start with an audio file of a modest filesize, and finish with an array of unsigned chars that can be loaded into OpenAL with alBufferData. My trouble is the steps that happen in the middle.
I thought AAC would be the way to go, but according to Apple representative Rincewind (circa 12/08):
Currently hardware assisted compression formats are not supported for decode on iPhone OS. These formats are AAC, MP3 and ALAC.
Using ExtAudioFile with a client format set generates PERM errors, so he's not making things up.
So, brave knowledge-havers, what are my options here? Package the app with .wav's and just suck up having a massive download? Write my own decoder?
Any links to resources or advice you might have would be greatly appreciated.
Offline rendering of compressed audio is now possible, see QA1562.
While Vorbis and the others suggested are good, they can be fairly slow on the iPhone as there is no hardware acceleration.
One codec that is natively supported (but has only a 4:1 compression ratio) is ADPCM, aka ima4. It's handled through the ExtAudioFile interface and is only the tiniest bit slower than loading .wav's directly.
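Here is a minimal sketch of that path: decode with ExtAudioFile to 16-bit PCM and hand the samples to OpenAL (assuming a mono source file and an already-initialized OpenAL context; error handling omitted):

    #include <AudioToolbox/AudioToolbox.h>
    #include <OpenAL/al.h>
    #include <stdlib.h>

    // Decode an ima4 (or wav) file to 16-bit mono PCM with ExtAudioFile
    // and hand the samples to an existing OpenAL buffer.
    void loadIntoBuffer(CFURLRef url, ALuint alBuffer)
    {
        ExtAudioFileRef file;
        ExtAudioFileOpenURL(url, &file);

        // Ask ExtAudioFile to decode to 16-bit mono LPCM at 44.1 kHz.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;
        ExtAudioFileSetProperty(file, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(fmt), &fmt);

        SInt64 frames = 0;
        UInt32 propSize = sizeof(frames);
        ExtAudioFileGetProperty(file, kExtAudioFileProperty_FileLengthFrames,
                                &propSize, &frames);

        UInt32 dataSize = (UInt32)frames * fmt.mBytesPerFrame;
        void* data = malloc(dataSize);

        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = 1;
        bufferList.mBuffers[0].mDataByteSize   = dataSize;
        bufferList.mBuffers[0].mData           = data;

        UInt32 framesRead = (UInt32)frames;
        ExtAudioFileRead(file, &framesRead, &bufferList);
        ExtAudioFileDispose(file);

        alBufferData(alBuffer, AL_FORMAT_MONO16, data, dataSize, 44100);
        free(data); // alBufferData copies the samples
    }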
There are some good open source audio decoding libraries that you could use:
mpg123
FAAD2 (its companion project FAAC is the matching encoder)
mpg123 is licensed under the LGPL, meaning you can use it in closed source applications provided modifications to the library, if any, are open sourced. Note that FAAD2 is GPL-licensed, so check its terms before shipping it in a closed source app.
You could always make your wave files mono and thereby cut the file size in half, but that might not be an acceptable trade-off for you.
Another option for doing your own decoding would be Ogg Vorbis. There's even a low-memory, integer-only version of their decoder called "Tremor" for processors without floating-point hardware.