How do I convert .pvr (PVRTC) files to .png on iPhone?

I need to convert some images from PVR to PNG at run time on the iPhone. I need to read them, decompress them, transform some colors, and then save them back to PVR or PNG. Any advice?

This Apple example program shows you how to load PVR texture files using the included PVRTexture class and then display them using OpenGL ES.
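For a rough sense of the upload step that kind of loader performs, here is a minimal Swift sketch (not the sample's actual code, which is Objective-C): it hands an already-extracted PVRTC payload to OpenGL ES with glCompressedTexImage2D. The pvrtcData, width and height parameters are assumed to come from your own PVR header parsing, and GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG comes from the PowerVR texture-compression extension header.

import OpenGLES

// Sketch: upload a raw 4-bpp PVRTC payload (PVR header already stripped) as mip level 0.
func uploadPVRTC4(pvrtcData: Data, width: GLsizei, height: GLsizei) -> GLuint {
    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    // No mipmaps in this sketch, so the min filter must not be a mipmap filter.
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)

    // 4 bits per pixel; PVRTC requires at least 32 bytes per mip level.
    let byteCount = max(32, Int(width) * Int(height) / 2)

    pvrtcData.withUnsafeBytes { (buffer: UnsafeRawBufferPointer) in
        glCompressedTexImage2D(GLenum(GL_TEXTURE_2D), 0,
                               GLenum(GL_COMPRESSED_RGB_PVRTC_4BPPV1_IMG),
                               width, height, 0,
                               GLsizei(byteCount), buffer.baseAddress)
    }
    return texture
}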

Do you specifically mean compressed PVRTC textures, or any of the formats (e.g. 565, 1555) supported by the PVR container? Also, what sort of transformations do you want to apply to the colours?
The reason I ask is that, IIRC, there is code to read/manipulate PVR files on the Imagination Technologies developer pages, but if you want to change the colours of PVRTC-compressed textures without recompressing the data entirely, there will be limits to what you can achieve. Changing the hue of whole regions and the like will certainly be possible, but manipulating individual pixels is likely to be too difficult.
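To answer the "which format am I dealing with" question programmatically, here is a Swift sketch that peeks at the legacy (v2) PVR header, the layout Apple's PVRTexture sample uses. The offsets and pixel-type codes below are assumptions that only hold for that header version; PVR v3 files are laid out differently.

import Foundation

// Sketch: inspect a legacy (v2) PVR header to tell whether the payload is
// PVRTC-compressed or a plain format such as RGB565/RGBA5551.
func describePVR(at url: URL) throws {
    let data = try Data(contentsOf: url)
    guard data.count >= 52 else { print("too small to be a PVR v2 file"); return }

    func u32(_ offset: Int) -> UInt32 {
        var value: UInt32 = 0
        withUnsafeMutableBytes(of: &value) { _ = data.copyBytes(to: $0, from: offset..<offset + 4) }
        return UInt32(littleEndian: value)
    }

    guard u32(44) == 0x21525650 else { print("'PVR!' tag not found"); return }

    let height = u32(4), width = u32(8)
    let pixelType = u32(16) & 0xff        // low byte of the flags field

    switch pixelType {
    case 0x18: print("\(width)x\(height) PVRTC 2bpp (compressed)")
    case 0x19: print("\(width)x\(height) PVRTC 4bpp (compressed)")
    case 0x13: print("\(width)x\(height) RGB565 (uncompressed)")
    case 0x11: print("\(width)x\(height) RGBA5551 (uncompressed)")
    default:   print("\(width)x\(height) pixel type 0x\(String(pixelType, radix: 16))")
    }
}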

Related

MTKTextureLoader causing banding in grayscale image

I'm trying to implement a simple LUT color grade in a Metal shader. It works with a color LUT, but when the LUT is grayscale, problems crop up. First, loading the grayscale image causes an "image decoding failed" error, which is fixed with this bug workaround.
By recharacterizing the image as a texture in the asset catalog, it loads successfully, but there's banding in the output image. Sure enough, capturing a GPU frame shows that banding has been introduced in the texture:
This banding doesn't appear when doing a Quick Look in the asset catalog, or on the source PNG. Inspecting the texture's pixel format shows that it's been encoded as ASTC_4x4_sRGB, which Apple's documentation describes as a compressed format for low-dynamic-range content. It seems as though this compression may be responsible for degrading the LUT texture. Normally when working with LUTs, I take care to avoid any compression, but I can't find a way to disable compression or force a pixel format in MTKTextureLoader.
I've also tried various MTKTextureLoader options, including enabling/disabling sRGB, mipmaps, etc.
Any ideas on how to fix the banding?
It's important to understand that when using MTKTextureLoader with texture assets in an asset catalog, most runtime texture loader options are ignored. This may not be documented, but it is currently the case.
You may be able to avoid this automatic compression (which is well-intentioned but both clumsy and too aggressive) by selecting your asset in the Xcode asset catalog editor and setting its Pixel Format explicitly to something like "8 Bit Normalized - RGBA", which maps to .rgba8Unorm at runtime.
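If editing the asset catalog setting isn't enough, a second workaround is to keep the LUT as a plain PNG in the app bundle (outside the asset catalog) and load it by URL, where MTKTextureLoader's runtime options are honored. A minimal sketch, assuming a hypothetical resource named "lut.png":

import MetalKit

// Sketch: loading from a file URL (not the asset catalog) avoids the automatic
// ASTC encoding entirely, and the loader options below are actually respected.
func loadLUT(device: MTLDevice) throws -> MTLTexture {
    let loader = MTKTextureLoader(device: device)
    guard let url = Bundle.main.url(forResource: "lut", withExtension: "png") else {
        throw NSError(domain: "LUTLoading", code: 1)
    }
    return try loader.newTexture(URL: url, options: [
        .SRGB: false,                                        // sample LUT values linearly
        .generateMipmaps: false,
        .textureUsage: MTLTextureUsage.shaderRead.rawValue
    ])
}

Whether .SRGB should be false depends on how the LUT was authored; the point is simply that these options take effect for file URLs, unlike for asset catalog textures.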

Uploading dynamic textures fast in Unity 3D

I receive JPEG-compressed video frames over the network every 30 frames. But I have a low-power mobile device, and it seems to lag a lot when I upload them with the following lines:
Texture2D tex = new Texture2D(2, 2);             // placeholder size; LoadImage resizes the texture
tex.LoadImage(MyUDPReceiver.Instance.data_JPG);  // decodes the JPEG on the CPU
Are there any more efficient ways to solve this problem?
You should not use JPEG or PNG images, as their decoding is very slow. They are also decoded into uncompressed textures and use a lot of RAM.
You should use ETC1 textures, or, if you need the alpha channel, DXT5. Note that DXT5 is not supported everywhere, so you might also need to support another texture type for that (PVRTC?).
There is Texture2D.LoadRawTextureData for this; to use it you will need to parse the header for the width/height values (it's just a simple struct).

How do I make textures from images?

I am using a lot of fruit and vegetable images in an iPhone game. How can I make textures from those images? From what I have read, every image in OpenGL ES is a texture, so what should I do?
I have developed 6 iOS applications, but this is my first game, so please guide me in the proper way so that I can get the idea.
You use glTexImage2D to upload raw pixel data to OpenGL in order to populate a texture. You can use Core Graphics, and particularly CGBitmapContextCreate, to get the raw pixel data of (or convert to raw pixel data) anything else Core Graphics can draw, which for you probably means a CGImageRef: either one loaded from a PNG or JPG through the C API, or just the result of [someUIImage CGImage].
Apple's GLSprite sample (you'll need to be logged in, and I'm not sure those links work externally, but do a search in the Developer Library if necessary) is probably a good starting point. I'm not 100% behind the class structure, but if you look into EAGLView.m, lines 272 to 305, the code there loads a PNG from disk then does the necessary steps to post it off to OpenGL, with a decent amount of commenting.
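For a rough idea of what that PNG-to-texture path looks like (a sketch, not the GLSprite sample's code, and in Swift rather than the sample's Objective-C): draw the CGImage into an RGBA bitmap context, then hand the pixels to glTexImage2D.

import UIKit
import OpenGLES

// Sketch: render any CGImage into an RGBA bitmap context owned by Core Graphics,
// then upload those pixels as a GL texture. Assumes an EAGLContext is current.
func makeTexture(from image: CGImage) -> GLuint {
    let width = image.width, height = image.height
    let colorSpace = CGColorSpaceCreateDeviceRGB()

    // Force-unwrapped for brevity; RGBA8 premultiplied-last is a supported combination.
    let context = CGContext(data: nil, width: width, height: height,
                            bitsPerComponent: 8, bytesPerRow: width * 4,
                            space: colorSpace,
                            bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))

    var texture: GLuint = 0
    glGenTextures(1, &texture)
    glBindTexture(GLenum(GL_TEXTURE_2D), texture)
    glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA,
                 GLsizei(width), GLsizei(height), 0,
                 GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), context.data)
    return texture
}

You would typically get the CGImage from UIImage(named: "strawberry")?.cgImage (a hypothetical asset name); depending on your texture coordinates the result may appear vertically flipped, which is a common gotcha on this path.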

Why is there a .pvr file in OpenGL (iOS)?

I am making an application with OpenGL on iOS, using PVR textures for a 3D effect. I don't understand .pvr files, so would you please give me an idea of what .pvr files are, why they are important in OpenGL, and how I can make them?
A PVR file is a container for various texture formats such as PVRTC, RGB565 and so forth. You can use these texture formats directly, as is. If you use PNG, the pixels might be premultiplied with alpha.
PVRTC is a compressed texture format that is natively supported by the GPU (PowerVR MBX or SGX). The GPU can render PVRTC efficiently, which can increase your frame rate.
PVR Textures and Memory
Using texturetool to Compress Textures
PVRTexTool
They are compressed texture files. You can convert more common formats into them using the texturetool utility that comes with Xcode. Compressed textures save bandwidth, loading time and memory, and they speed up your application because they stay compressed in video memory as well. They can also contain mipmaps.
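As a concrete example of the texturetool step (the flags are as described in Apple's "Using texturetool to Compress Textures" document mentioned above; double-check them against your Xcode version), converting a PNG to a 4-bpp PVRTC texture with mipmaps and a PVR header looks roughly like this:

texturetool -m -e PVRTC --bits-per-pixel-4 -f PVR -o Image.pvr Image.png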

JPEG to PNG conversion

I am working with images on the iPhone. There are lots of JPEG images ranging from 35 KB to 50 KB, and I may need to transfer them over the internet, which comes to around 6 MB in total. I tried converting a 35 KB JPEG image to PNG, and the size actually increased: the JPEG was 56.1 KB and the PNG is 576 KB. I used MS Paint to change the format. Shouldn't converting JPEG to PNG decrease the size of the image? If not, is it fine to have JPEG files on the iPhone, or only PNG, as typical mobile applications have?
JPEG and PNG are very different file formats; any given image that is smaller in one may not be smaller in the other. Furthermore, their quality is not directly comparable.
For example, photographic content is very well represented by JPEG. Its block-based, frequency-domain compression does a very good job of discarding visual information in ways that human eyes do not easily notice. Of course, a highly compressed JPEG may throw away too much information, making the blocks visible and instantly breaking the illusion of photographic reality, but used carefully, JPEG is fantastic for photos of the 'real world'.
And computer-generated content is very well represented in PNG. The lossless encoding is great for showing the straight lines of standard computer-generated displays, and naively-created gradients are replicated exactly with PNG. Had JPEG been used for either straight lines or naive gradients, the shortcomings would stand out instantly. Also, because PNG can be palette-based, it can very efficiently store images with only a few dozen colors.
So, pick the file format based on its use: JPEG for photos of reality or for very good approximations of reality, and PNG for computer-generated content.
PNG files are usually smaller if the content is graphical and contains a lot of evenly colored shapes. For photos or scans, JPEG files are much smaller, since they use a far more sophisticated, but lossy, compression algorithm.
For your iPhone project you should use whatever is smaller; in your case, JPEG.
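If you ever want to decide at run time rather than by eyeballing file sizes, a minimal Swift sketch using UIImage's standard encoders is to encode the image both ways and compare byte counts (note that re-encoding an already lossy JPEG as JPEG costs a little extra quality, so this is only a size check):

import UIKit

// Sketch: encode the same image as JPEG and PNG and keep whichever is smaller.
func smallerEncoding(of image: UIImage) -> (name: String, data: Data)? {
    guard let jpeg = image.jpegData(compressionQuality: 0.8),
          let png = image.pngData() else { return nil }
    print("JPEG: \(jpeg.count) bytes, PNG: \(png.count) bytes")
    return jpeg.count <= png.count ? ("jpeg", jpeg) : ("png", png)
}

// Usage (hypothetical asset name):
// if let photo = UIImage(named: "photo"), let result = smallerEncoding(of: photo) {
//     print("send \(result.name), \(result.data.count) bytes")
// }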