Uploading dynamic textures fast in Unity 3D

I receive JPEG-compressed video frames over the network at roughly 30 frames per second. I'm on a low-power mobile device, and it lags a lot when I upload each frame with the following lines:
Texture2D tex = new Texture2D(2, 2); // LoadImage resizes the texture to the decoded image's dimensions
tex.LoadImage(MyUDPReceiver.Instance.data_JPG);
Are there any more efficient ways to solve this problem?

You should not use JPEG or PNG images, as decoding them is very slow. They are also decoded into uncompressed textures in memory and use a lot of RAM.
You should use ETC1 textures or, if you need the alpha channel, DXT5. Note that DXT5 is not supported everywhere, so you might also need to support a different texture type as a fallback (PVRTC?).
There is Texture2D.LoadRawTextureData for this; to use it you will need to parse the file header for the width/height values yourself (it is just a simple struct).
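A minimal sketch of that approach in Unity C#, assuming a hypothetical 12-byte header (width, height, and a TextureFormat id, each a little-endian int32) in front of the raw compressed payload — your sender's actual header layout will differ:

using UnityEngine;

public class RawFrameUploader : MonoBehaviour
{
    Texture2D tex;

    // Hypothetical packet layout: 12-byte header (three int32s) + raw texture data.
    public void OnFrame(byte[] packet)
    {
        int width  = System.BitConverter.ToInt32(packet, 0);
        int height = System.BitConverter.ToInt32(packet, 4);
        TextureFormat format = (TextureFormat)System.BitConverter.ToInt32(packet, 8);

        // (Re)create the texture only when the size or format changes.
        if (tex == null || tex.width != width || tex.height != height || tex.format != format)
            tex = new Texture2D(width, height, format, false);

        byte[] payload = new byte[packet.Length - 12];
        System.Buffer.BlockCopy(packet, 12, payload, 0, payload.Length);

        tex.LoadRawTextureData(payload); // no CPU decode, just a copy
        tex.Apply(false);                // upload to the GPU
    }
}

You can also check SystemInfo.SupportsTextureFormat() once at startup to decide whether to ask the server for DXT5, ETC1, or PVRTC frames.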

Related

How do I convert .pvr (PVRTC) files to .png on the iPhone?

I need to convert some images from PVR to PNG at run time on the iPhone. I need to read them, decompress them, transform some colors, and then save them back to PVR or PNG. Any advice?
Apple's PVRTextureLoader example program shows how to load PVR texture files using the included PVRTexture class and then display them using OpenGL.
Do you specifically mean compressed PVRTC textures, or any of the formats (e.g. 565, 1555) supported by the PVR container? Also, what sort of transformations did you want to apply to the colours?
The reason I ask is that, IIRC, there is code to read/manipulate PVR files on the Imagination Technologies developer web pages, but if you want to change the colours of PVRTC-compressed textures without recompressing the data entirely, there will be limits to what you can achieve. Certainly, changing the hue of whole regions will be possible, but manipulating individual pixels is likely to be too difficult.

Approach for recording grayscale video on the iPhone?

I am building an iphone app that needs to record grayscale video and save it to the camera roll. I'm stumped at how best to approach this.
I am thinking along the following lines:
Use a shader and OpenGL to transform the video to grayscale
Use AVFoundation (AVAssetWriter with an AVAssetWriterInputPixelBufferAdaptor) to write the video to the file.
My questions are:
Is this the right approach (simplest, best performance)?
If so, what would be the best way to go from the OpenGL output to a CVPixelBufferRef input for the AVAssetWriterInputPixelBufferAdaptor?
If not, what would be a better approach?
Any nudge in the right direction is much appreciated!
In general, I'd agree with this approach. Doing your processing in an OpenGL ES 2.0 shader should be the most performant way of doing video frame alteration like this, but it won't be very simple. Fortunately, you can start from a pre-existing template that already does this.
You can use the sample application I wrote here (and explained here) as a base. I use custom shaders in this example to track colors in an image, but you could easily alter this to convert the video frames to grayscale (I even saw someone do this once). The code for feeding camera video into a texture and processing it could be used verbatim from that sample.
In one of the display options within that application, I render the processed image first to a framebuffer object, then use glReadPixels() to pull the resulting image back into bytes that I can work with on the CPU. You could use this to get the raw image data back after the GPU has processed a frame, then feed those bytes into CVPixelBufferCreateWithBytes() to generate your CVPixelBufferRef for writing to disk.
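A rough sketch of that readback path in C; the function and callback names here are mine, the framebuffer object setup is omitted, and glReadPixels()/CVPixelBufferCreateWithBytes() are the real calls:

#include <stdlib.h>
#include <OpenGLES/ES2/gl.h>
#include <CoreVideo/CoreVideo.h>

// Frees the pixel bytes once CoreVideo/AVFoundation is done with the buffer.
static void releaseBytes(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);
}

// Assumes a framebuffer object of `width` x `height` is currently bound.
CVPixelBufferRef pixelBufferFromCurrentFBO(size_t width, size_t height)
{
    // glReadPixels() stalls until the GPU finishes rendering, then copies
    // the framebuffer contents into CPU memory.
    GLubyte *rawBytes = (GLubyte *)malloc(width * height * 4);
    glReadPixels(0, 0, (GLsizei)width, (GLsizei)height,
                 GL_RGBA, GL_UNSIGNED_BYTE, rawBytes);

    // Wrap the bytes without copying; after the BGRA-swizzling trick
    // described below, they are already in BGRA channel order.
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, width, height,
                                 kCVPixelFormatType_32BGRA,
                                 rawBytes, width * 4,
                                 releaseBytes, NULL, NULL, &pixelBuffer);
    return pixelBuffer;
}

The resulting buffer can then be handed to the adaptor's appendPixelBuffer:withPresentationTime: method.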
(Edit: 2/29/2012) As an update to this, I just implemented this kind of video recording in my open source GPUImage framework, so I can comment on the specific performance for the encoding part of this. It turns out that you can capture video from the camera, perform live filtering on it, grab it from OpenGL ES using glReadPixels(), and write that out as live H.264 video in 640x480 frames on an iPhone 4 at 30 FPS (the maximum camera framerate).
There were a few things that I needed to do in order to get this recording speed. You need to make sure that you set your AVAssetWriterInputPixelBufferAdaptor to use kCVPixelFormatType_32BGRA as its color format for input pixel buffers. Then, you'll need to re-render your RGBA scene using a color-swizzling shader to provide BGRA output when using glReadPixels(). Without this color setting, your video recording framerates will drop to 5-8 FPS on an iPhone 4, where with it they are easily hitting 30 FPS. You can look at the GPUImageMovieWriter class source code to see more about how I did this.
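The swizzling fragment shader itself can be a one-liner; a sketch as a GLSL ES source string in C, with hypothetical varying/uniform names:

static const char *kBGRASwizzleFragmentShader =
    "varying highp vec2 textureCoordinate;\n"
    "uniform sampler2D inputImageTexture;\n"
    "void main()\n"
    "{\n"
    "    // Swap red and blue so glReadPixels() returns BGRA-ordered bytes.\n"
    "    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
    "}\n";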
Using the GPUImage framework, your above filtering and encoding task can be handled by simply creating a GPUImageVideoCamera, attaching a target of a GPUImageSaturationFilter with the saturation set to 0, and then attaching a GPUImageMovieWriter as a target of that. The framework will handle the OpenGL ES interactions for you. I've done this, and it works well on all iOS devices I've tested.
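In Objective-C, that pipeline looks roughly like this (method names as I recall them from the GPUImage headers; outputURL is a placeholder for your movie file URL):

GPUImageVideoCamera *camera =
    [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
                                        cameraPosition:AVCaptureDevicePositionBack];

GPUImageSaturationFilter *grayscale = [[GPUImageSaturationFilter alloc] init];
grayscale.saturation = 0.0; // a saturation of 0 yields grayscale

GPUImageMovieWriter *writer =
    [[GPUImageMovieWriter alloc] initWithMovieURL:outputURL
                                             size:CGSizeMake(640.0, 480.0)];

[camera addTarget:grayscale];
[grayscale addTarget:writer];

[camera startCameraCapture];
[writer startRecording];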

Why is there a .pvr file in OpenGL (iOS)?

I am making an application with OpenGL on iOS that uses PVR textures for a 3D effect, and I don't understand .pvr files. Could someone explain what .pvr files are, why they matter in OpenGL, and how I can create them?
A PVR file is a container for various texture formats such as PVRTC, RGB565, and so forth. You can use these texture formats directly, as-is. If you use PNG, the pixels might have premultiplied alpha.
PVRTC is a compressed texture format that is natively supported by the GPU (PowerVR MBX or SGX). The GPU can render PVRTC efficiently, which can increase your framerate.
PVR Textures and Memory
Using texturetool to Compress Textures
PVRTexTool
They are compressed texture files. You can convert more common formats into them using the texturetool utility that comes with Xcode. Compressed textures save bandwidth, loading time, and memory, and they speed up your application because they stay compressed in video memory as well. They can also contain mipmaps.
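For example, a 4-bits-per-pixel PVRTC encode with mipmaps (flags as shown in Apple's texturetool documentation; run texturetool with no arguments to see the full list):

texturetool -m -e PVRTC --bits-per-pixel-4 -f PVR -o Image.pvr Image.png

Here -m generates mipmaps, -e PVRTC selects the encoder, and -f PVR wraps the raw data in a PVR container that loaders such as Apple's PVRTexture class understand.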

JPEG to PNG conversion

I am working with images on the iPhone. There are lots of JPEG images ranging from 35 KB to 50 KB, and I may need to transfer them over the internet, which comes to around 6 MB in total. I tried converting a 35 KB JPEG image to PNG using MS Paint, and the size actually increased: the JPEG was 56.1 KB and the PNG is 576 KB. Shouldn't converting JPEG to PNG decrease the size of the image? If not, is it fine to have JPEG files on the iPhone, or should a typical mobile application use only PNG?
JPEG and PNG are very different file formats; any given image that is smaller in one may not be smaller in another. And furthermore, their quality is not directly comparable.
For example, photographic content is very well represented in JPEG. Its block-based lossy encoding does a very good job of discarding visual information in ways that human eyes do not easily notice. Of course, a highly compressed JPEG may throw away too much information, revealing the blocks and instantly breaking the illusion of photographic reality, but used carefully, JPEG is fantastic for photos of the 'real world'.
And computer-generated content is very well represented in PNG. The lossless encoding is great for showing the straight lines of standard computer-generated displays, and naively-created gradients are replicated exactly with PNG. Had JPEG been used for either straight lines or naive gradients, the shortcomings would stand out instantly. Also, because PNG can be palette-based, it can very efficiently store images with only a few dozen colors.
So, pick the file format based on its use: JPEG for photos of reality or for very good approximations of reality, and PNG for computer-generated content.
PNG files are usually smaller if their contents are graphical and contain a lot of evenly colored shapes. For photos or scans, JPEG files are far smaller, since they use a much more sophisticated, yet lossy, compression algorithm.
For your iPhone project you should use whichever is smaller; in your case, JPEG.

How does Quartz handle texture compression?

I'm developing on the iPhone, and the majority of our game uses OpenGL ES, but there are also menus that use CGImage and Quartz to be displayed. In OpenGL ES, I know that no matter what compressed image format goes in (JPG, PNG, etc.), the data stored in memory as a texture is 8 bits per channel, unless I use PVRTC, in which case I can get it down to 2 or 4 bits per pixel. We've been having memory issues due to large CGImages, so my question is: what sort of optimizations and compression do Quartz and CGImage use? I can't find the details in Apple's docs, when really I want to know whether it would make a difference to use a 256-color image, or a JPG vs. a PNG, whether power-of-two dimensions help, etc. Speed is unimportant; memory is the bottleneck here.
Thanks.
Quartz is uncompressed. It is for quickly compositing and rendering pixel-accurate content. Once your images have been drawn into a context, it doesn't matter where they came from; they take whatever that context takes per pixel, for however many pixels they have (generally 4 bytes per pixel on a device, if I recall correctly). For example, a 1024x1024 menu image occupies about 1024 x 1024 x 4 = 4 MB once drawn, whether the source file was a small JPG or a large PNG. The one big thing Quartz does is premultiply the alpha, which avoids extra work during blending.
Now, some views under memory pressure can evict their contents if not displayed, and reconstitute them as needed. In those cases a CGImage from a compressed source generally ends up taking less memory, but I suspect that is not relevant in the case you described.
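If you want a back-of-the-envelope number for a particular image, a small C helper (CGImageGetBytesPerRow and CGImageGetHeight are real CoreGraphics calls; the function name is mine):

#include <CoreGraphics/CoreGraphics.h>

// Approximate in-memory cost of a decoded CGImage. The source file
// size and format (JPG vs. PNG) are irrelevant once it is decoded.
size_t decodedSizeInBytes(CGImageRef image)
{
    // Bytes per row already accounts for bits per pixel and row padding.
    return CGImageGetBytesPerRow(image) * CGImageGetHeight(image);
}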