Ignore extra white space in Unity3D Texture - unity3d

I have different textures for a player's helmet, shirt and pants in order to render custom uniforms. They include white space so that each texture maps onto the model correctly, but this is making the app's installed size huge: the game has over a hundred items and each texture is 2.7 MB.
How can I tell Unity to ignore parts of the image, or map the textures onto the player in a way that does not need the white space? For example, cutting the whitespace out of the helmet image lowers its size to under 1 MB.
Thanks!

For the sake of others who read this:
The obvious answer is, cut out the empty spaces in an image editor. That will solve the problem in the way it really should be solved.
That being said, it's quite possible you are using poorly UV-mapped models that need that space, and that you are unable to fix this yourself, as is the case for the person who asked this question.
If you're in a position where it would only cost a little time or money to get someone to fix it, you should, because no matter what, you're wasting space, and it will add up. No one wants a 100 MB download to get 50 MB worth of game. And if you paid someone for models and they came like this, consider taking it up with them, because this is a somewhat major flaw.
The "real" answer:
The first thing you should do is enable compression. From your picture it appears you are using the RGBA 16-bit format. This is a lower quality version of Truecolor, an uncompressed 32-bit format, but is not compressed in the "traditional" sense.
You should use the "Compressed" image import setting (to see it, you must turn off the Advanced settings). This will select one of several compression formats (depending on the platform), all of which are highly optimized. You can define a specific compression in the Advanced window, but it is rarely needed, as Unity is great at choosing the right one for a given situation, and can take special cases (such as specific chipsets) into consideration.
Depending on the compression algorithm, that white space could easily end up taking next to no space, and depending on the image, the compression might end up virtually undetectable.
The "Compressed" setting typically cuts texture memory to a quarter or an eighth of the uncompressed size, since most GPU block-compressed formats use 2-8 bits per pixel instead of 32.
From there, if your image is still too large, you can experiment with the import size, which trades space for image quality. You are importing at 1024x1024 right now; importing at 512x512 halves the resolution in each dimension and cuts the space taken to roughly a quarter, and depending on the art style the visual difference can often be negligible.
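To put rough numbers on it (a back-of-the-envelope estimate, not exact figures): a 1024x1024 texture has 1,048,576 pixels. At 32 bits per pixel (Truecolor) that is 4 MB; at 16 bits per pixel it is 2 MB, or roughly 2.7 MB once mipmaps are included, which is likely where your per-texture size comes from. At the ~4 bits per pixel of a typical compressed format it is about 0.5 MB, and a 512x512 compressed version drops to around 0.13 MB.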
You can find more details on these settings in the documentation for the Texture Importer.

Related

is non-square tilable atlas effective

I have a building model with 6K faces and I want to texture it with some pretty high-detail 512x512 tileable textures (which represent about 32cm x 32cm), and I'd like to be as mobile-friendly as possible (not necessarily targeting old phones, but GearVR-capable phones).
The model happens to have mostly long horizontal quads, e.g.:
|-----------|---|----------------|
|-----------|---|----------------|
|-----------| |----------------|
|-----------| |----------------|
|-----------|---|----------------|
|-----------|---|----------------|
So the uv's of each of those horizontal sections can be stacked on one tileable texture, to achieve both horizontal and vertical tiling.
Further, if the tiles were 512x512 textures, I could stack 8 of them in a 512x4096 non-square (but power of two) texture.
That way I could texture the main mesh with a single texture, plus maybe one extra for the metallic map.
Is this reasonable, or should I keep them as separate 512x512 textures? Wouldn't separate textures mean like 8x the draw calls which would be far worse than a non-square 512x4096 texture?
After some research, I found that the technique of stacking textures and tiling horizontally is called using a trim sheet. It is very much valid, and is used extensively in game development to re-use high-detail textures on many different objects.
https://www.youtube.com/watch?v=IziIY674NAw
The trim sheet info I found, though, did not cover the 'non-square' part, which is the main question. From several sources I gathered that some devices support non-square textures and some do not, and some support them but don't compress them well, so it's a 'check your target devices' issue.
Assuming a device does support non-square, it should in fact save memory to have a strip of textures, and should save draw calls, but your engine may just 'repeat them horizontally until square' for you when importing the texture to 'be safe' (so again, check target devices and engines). It would perhaps be wise to limit to 4 rather than 16 stacked textures, to avoid 'worst case scenarios'.
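For what it's worth, the bookkeeping for an 8-row 512x4096 strip is simple. A rough sketch (plain Swift, made-up names, just to illustrate the idea): U stays free to repeat for horizontal tiling, while V has to stay pinned inside the chosen row or you bleed into the neighbouring trim.
```swift
// Which slice of a 512x4096 strip (8 stacked 512x512 tiles) a face samples from.
struct UVRect { var uMin, vMin, uMax, vMax: Float }

func trimRow(_ index: Int, rowCount: Int = 8) -> UVRect {
    let rowHeight = 1.0 / Float(rowCount)
    let vMin = Float(index) * rowHeight
    return UVRect(uMin: 0, vMin: vMin, uMax: 1, vMax: vMin + rowHeight)
}

trimRow(2)   // UVRect(uMin: 0.0, vMin: 0.25, uMax: 1.0, vMax: 0.375)
```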
Hopefully, the issue will eventually be addressed either by video cards being able to handle several materials in one draw call, or by more universal support for texture strips, but it seems the state of the art has not focused on that yet.
Another, more custom solution: some people have created shaders that use vertex color information on a mesh to choose which part of a texture to use, and then tile within that region. Apparently the overhead turned out to be quite low, and it was a success, so it's good to have an idea about 'backup plans'. This however would be an engine/environment/device-specific kind of optimization, not a general modeling practice.
http://www.gamasutra.com/blogs/MarkHogan/20140721/221458/Unity_Optimizing_For_Mobile_Using_SubTile_Meshes.php
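I haven't reproduced the article's code, but the general idea of the UV math such a shader performs looks roughly like this (plain Swift for illustration only; in practice it runs per-fragment in the shader, and the tile index comes from a vertex color channel):
```swift
// fract(uv) gives the repeating pattern, which is then squeezed into the tile
// selected by `tileIndex`. Naive fract() plus mipmapping needs extra care at
// the wrap seam, so treat this as the concept, not production shader code.
func fract(_ x: Float) -> Float { x - x.rounded(.down) }

func subTileUV(u: Float, v: Float, tileIndex: Int, tilesPerAxis: Int) -> (u: Float, v: Float) {
    let tileSize = 1.0 / Float(tilesPerAxis)
    let tileU = Float(tileIndex % tilesPerAxis) * tileSize
    let tileV = Float(tileIndex / tilesPerAxis) * tileSize
    return (tileU + fract(u) * tileSize, tileV + fract(v) * tileSize)
}

subTileUV(u: 2.3, v: 0.5, tileIndex: 3, tilesPerAxis: 2)   // lands inside tile 3's quarter of the atlas
```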

How can I process an image to remove a watermark within my iPhone application?

I want to remove watermark from a picture within my iPhone / iPad application. Is there any kind of image processing I can perform within this application to do this?
Can't be done, sorry.
The watermarked image was originally two images (the base and the watermark), which were merged together to form the result. The problem here is that the most common image formats (such as JPG, PNG, or GIF) have no concept of layers - so there is no way for the base to be one layer and the watermark another: the result is just one layer, onto which both were redrawn. This is somewhat similar to a physical painting: if you paint one image on a paper using watercolors, and then another over the same spot, their colors will mix and you won't be able to tell which parts belong to one or the other, as they've become a single image.
It is the same with computer image formats: there is only one "layer", which for every pixel encodes exactly the one color that is there - only the current color exists, and the image doesn't keep track of what was on that pixel before.
Now, the information is irreversibly lost from the result - in other words, it is not possible to recover the base knowing just the result (or the result and watermark) - BTW, that's exactly the point of watermarking.
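To make the irreversibility concrete, here's a sketch of the standard "over" blend for a single color channel (Swift used just for illustration):
```swift
// Values in 0...1. Once only `result` is stored, `base` is recoverable only
// if a < 1 *and* both the watermark value and its opacity are known exactly;
// where a == 1 the base contribution is simply gone.
func blend(base: Double, watermark: Double, a: Double) -> Double {
    watermark * a + base * (1 - a)
}

blend(base: 0.2, watermark: 0.9, a: 1.0)   // 0.9  -> no trace of the base left
blend(base: 0.2, watermark: 0.9, a: 0.5)   // 0.55 -> ambiguous without knowing a and the watermark
```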
I have borrowed the image sprites of StackOverflow for a demonstration; the actual images used are not unique, the technique would work just as well with any images. This was the watermark I used:
And this is the result image, after merging with the base:
Now, even though we have the exact watermark image used, there's no way to recover what was underneath that star in the original image. Through image processing operations, we could almost remove the star from the result, but there's not enough data to tell us what used to be underneath - that information got erased in the merge at the beginning.
We could guess what used to be there, but then we're not doing recovery any more, we're interpreting the image and guessing what possibly could have been there - and that's pretty hard, even for a human; computers are really bad at that. This is the original image, before I watermarked it - I bet you were expecting something slightly different, no?
The watermark is almost certainly part of the image. (The only case in which it wouldn't be is something like PDF or SVG, where it could be a separate vector element.)
Watermarks are typically present on images for purposes of managing intellectual property; if one has licensed an image for a particular use, typically one will receive access to a version of the image without a watermark. Thus wanting to "remove watermarks" is also likely to be treated as highly suspicious.
Watermarks are part of the image, there isn't going to be a magic way to remove them and recover the missing pixels in any tool.
Take a look at the source! Most of the current watermarking is done in PHP as an automated script. In most cases you will see the base picture in the page source.

PVR Texture Compression Tiling (exposing edge context)

I've got PVR texture compression working all happy and good in my iPhone game, but I've got issues when tiling multiple textures together. Basically, I've got a very large background which is split into multiple 512x512 tiles, all PVR compressed. Then they're drawn together to look like one big background image. The problem is the way PVR works: the compressor doesn't know it's supposed to be compressing each tile as part of a really big texture - i.e. it can't use a neighbor's tile information to determine how to perform the PVR compression - so artifacts show up where the tiles meet.
I can think of maybe a couple ways to do this.
1) Somehow tell the texturetool command line program to account for the other images that will be adjacent.
2) Use the command line program to generate a huge PVR texture that represents the whole image, then somehow split up the bytes into multiple images - probably impossible.
3) Do some kind of OpenGL ES trickiness that blends the edges nicely.
4) Do some trickiness where I have redundant information in each tile and then clip those areas when the texture is drawn (please no).
Hopefully I can do 1, 2 or 3, or there is some other well known solution.
I ended up going with option 4. I don't think this was a situation where PVRTC isn't appropriate - in fact it's almost a necessity. When I've got a total of 24 512x512 textures in memory at once (representing a very large background and foreground), keeping those uncompressed is suicide. So I simply used PVR compression as normal, then edited a few lines of code in my tiling algorithm so that the tiles overlap and 15 pixels are trimmed from each edge. Voila, looks great. Took a couple days and was pretty annoying, but I think this is a good option for people who need very large tiled backgrounds on the iPhone.
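Roughly, the placement math looks like this (a sketch with made-up helper names, following the 512-pixel tiles and 15-pixel trim described above):
```swift
// The tiles are exported with 15 px of their neighbours' content along each
// edge; at draw time each quad shows only the inner 482 px, so the
// artifact-prone borders of each compressed tile are never sampled.
let tileSize: Float = 512
let trim: Float = 15
let visible = tileSize - 2 * trim            // 482 px of each tile actually drawn

func tilePlacement(column: Int) -> (x: Float, width: Float, uMin: Float, uMax: Float) {
    let x = Float(column) * visible          // quads butt up seamlessly
    let uMin = trim / tileSize               // skip the trimmed left border
    let uMax = (tileSize - trim) / tileSize  // ...and the trimmed right border
    return (x, visible, uMin, uMax)
}
```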
My best advice, but not what you asked, is to know when PVRTC isn't appropriate. By far the simplest solution is to just not use PVRTC for those tiles. I've spent a lot of time trying to bend PVRTC to work in situations it just isn't suited for.
That said,
When using PVRTC, the texture is always assumed to tile (with itself), thus pixels on the right edge affect the pixels on the left edge (same with top & bottom). So choices 1 or 2 likely won't work.
One possibility is to add an alpha channel to the tiles and allow them to fade out around the edges so that when you overlap them, they fade into each other. Keep in mind PVRTC tends to work better with gradual alpha fades. Hard alpha edges often have artifacts in PVRTC.
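If you go that route, the fade can be as simple as an alpha ramp based on distance from the tile edge. A rough sketch (illustrative only, fade width chosen arbitrarily):
```swift
// Alpha ramps from 0 at the tile edge to 1 over `fadeWidth` pixels, so
// overlapped neighbours cross-fade instead of meeting at a hard seam.
func edgeAlpha(pixel: Int, size: Int, fadeWidth: Int = 32) -> Float {
    let fromEdge = min(pixel, size - 1 - pixel)          // distance to nearest edge
    return min(1, Float(fromEdge) / Float(fadeWidth))
}
```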

Performance-wise: A lot of small PNGs or one large PNG?

Developing a simple game for the iPhone, which gives better performance?
1) Using a lot of small (10x10 to 30x30 pixels) PNGs for my UIViews' backgrounds.
2) Using one large PNG and clipping to the bounds of my UIViews.
My thought is that the first technique requires less memory per individual UIView, but complicates how the iPhone handles the large amount of images, as it tries to combine the images into a larger texture or tries to switch between all the small textures a lot.
The second technique, on the other hand, gives the iPhone the opportunity to handle just one large PNG, but unnecessarily increases the amount of image data every UIView has to carry.
Am I right about the iPhone's attempts, handling the images the way I described it?
So, what is the way to go?
Seeing the answers thus far, there is still doubt. There seems to be a trade-off with two parameters: Complexity and CPU-intensive coding. What would be my tipping point for deciding what technique to use?
If you end up referring back to the same CGImageRef (for example by sharing a UIImage *), the image won't be loaded multiple times by the different views. This is the technique used by the videowall Core Animation demo at the WWDC 07 keynote. That's OSX code, but UIViews are very similar to CALayers.
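A minimal sketch of that sharing pattern (written here in Swift; the asset name is made up):
```swift
import UIKit

// Decode the image once and hand the same UIImage (and thus the same backing
// CGImage) to every view that needs it, rather than creating a fresh image per view.
let sharedBackground = UIImage(named: "tile_background")

func makeTileView(frame: CGRect) -> UIImageView {
    let view = UIImageView(frame: frame)
    view.image = sharedBackground      // all views reference the one decoded bitmap
    return view
}
```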
The way Core Graphics handles images (from my observation anyway) is heavily tweaked for just-in-time loading and aggressive releasing when memory is tight.
Using a large image you could end up loading the image at draw time if the memory for the decoded image that CGImageRef points to has been reclaimed by the system.
What makes a difference is not how many images you have, but how often UIKit calls back into your drawing code.
Both UIViews and Core Animation CALayers will only repaint if you ask them to (-setNeedsDisplay), and the bottleneck usually is your code plus transferring the rendered content into a texture for the graphics chip.
So my advice is to structure your UIView layout so that portions which change together are updated at the same time, which then turns into a single texture upload.
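For example (a sketch, not code from the demo mentioned above): group everything that changes together under one view and invalidate it once.
```swift
import UIKit

// One view owns everything that changes together; it is invalidated once,
// UIKit calls draw(_:) a single time, and the result goes to the GPU as a
// single texture upload.
final class HUDView: UIView {
    var score = 0 { didSet { setNeedsDisplay() } }   // one repaint per change
    override func draw(_ rect: CGRect) {
        // draw the score, lives, timer, etc. here, all in one pass
    }
}
```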
One large image mainly gives you more work and more headaches. It's harder to maintain, but is probably less RAM-intensive, because there is only one structure plus data in memory instead of many structures plus data (though probably not enough to notice).
Looking at the contents of .app bundles on regular Mac OS, it seems the generally approved method of storage is one file/resource per image.
Of course, this is assuming you're not getting the image resources from the web, where the bottleneck would be HTTP and its specified maximum of two concurrent requests.
One large image gives you better performance (assuming you need to render all the pictures anyway).
One large image will remove any overhead associated with opening and manipulating many images in memory.
I would say there is no authoritative answer to this question. A single large image cuts down on (slow) flash access and gets the decoding done in one go, but it's memory-hungry and you do have to slice it up or mask it; a lot of smaller images give you better control over what processing happens when.
You will have to implement one solution and test it. If it isn't fast enough and you can't optimise, implement the other. I suggest implementing the version you personally find easier to reason about, because that will be the easiest to implement.

Why do images for textures on the iPhone need to have power-of-two dimensions?

I'm trying to solve this flickering problem on the iphone (open gl es game). I have a few images that don't have pow-of-2 dimensions. I'm going to replace them with images with appropriate dimensions... but why do the dimensions need to be powers of two?
The reason that most systems (even many modern graphics cards) demand power-of-2 textures is mipmapping.
What is mipmapping?
Smaller versions of the image are created so that the texture still looks correct when drawn at very small sizes. The image's dimensions are halved over and over to make these new images.
So, imagine a 256x128 image. This would have smaller versions created of dimensions 128x64, 64x32, 32x16, 16x8, 8x4, 4x2, 2x1, and 1x1.
If this image was 256x192, it would work fine until you got down to a size of 4x3. The next smaller image would be 2x1.5 which is obviously not a valid size. Some graphics hardware can deal with this, but many types cannot.
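A quick sketch of how the chain is generated (illustrative only; real hardware/APIs round the odd dimension down at that point, which is exactly where older chips get into trouble):
```swift
// Build the mip chain by repeatedly halving each dimension down to 1x1.
func mipChain(width: Int, height: Int) -> [(Int, Int)] {
    var (w, h) = (width, height)
    var levels = [(w, h)]
    while w > 1 || h > 1 {
        w = max(1, w / 2)   // integer division rounds the odd dimension down
        h = max(1, h / 2)
        levels.append((w, h))
    }
    return levels
}

mipChain(width: 256, height: 128)   // (256,128), (128,64), ... (2,1), (1,1) - exact halves all the way
mipChain(width: 256, height: 192)   // ... (4,3), (2,1), (1,1) - no longer exact halves below 4x3
```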
Some hardware also requires a square image but this isn't very common anymore.
Why do you need mipmapping?
Imagine that you have a picture that is VERY far away, so far away as to be only the size of 4 pixels. Now, when each pixel is drawn, a position on the image will be selected as the color for that pixel. So you end up with 4 pixels that may not be at all representative of the image as a whole.
Now, imagine that the picture is moving. Every time a new frame is drawn, a new pixel is selected. Because the image is SO far away, you are very likely to see very different colors for small changes in movement. This leads to very ugly flashing.
Lack of mipmapping causes problems for any size that is smaller than the texture size, but it is most pronounced when the image is drawn down to a very small number of pixels.
With mipmaps, the hardware will have access to a 2x2 version of the texture, so each pixel on it will be the average color of that quadrant of the image. This eliminates the odd color flashing.
http://en.wikipedia.org/wiki/Mipmap
Edit to people who say this isn't true anymore:
It's true that many modern GPUs can support non-power-of-two textures but it's also true that many cannot.
In fact, just last week I had a 1024x768 texture in an XNA app I was working on, and it caused a crash upon game load on a laptop that was only about a year old. It worked fine on most machines though. It's a safe bet that the iPhone's GPU is considerably simpler than a full PC GPU.
Typically, graphics hardware works natively with textures in power-of-2 dimensions. I'm not sure of the implementation/construction details that cause this to be the case, but it's generally how it is everywhere.
EDIT: With a little research, it turns out my knowledge is a little out of date -- a lot of modern graphics cards can handle arbitrary texture sizes now. I would imagine that with the space limitations of a phone's graphics processor though, they'd probably need to omit anything that would require extra silicon like that.
You can find OpenGL ES support info about Apple Ipod/Iphone devices here:
Apple OpenGL ES support
OpenGL ES 2.0 is based on desktop OpenGL 2.0.
The power-of-two constraint was only relaxed in version 2.0 (and even then, mipmapping and repeat wrapping of non-power-of-two textures still require an extension).
So if you are using an OpenGL ES version below 2.0, this is the expected behavior.
I imagine it's a pretty decent optimization in the graphics hardware to assume power-of-2 textures. I bought a new laptop with the latest laptop graphics hardware, and if textures aren't power-of-2 in Maya, the rendering is all messed up.
Are you using PVRTC compression? That requires powers of 2 and square images.
Try implementing wrapping texture-mapping in software and you will quickly discover why power-of-2 sized are desirable.
In short, you will find that if you can assume power-of-2 dimensions then a lot of integer multiplications and divisions turn into bit-shifts.
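A tiny sketch of what that buys you (illustrative only):
```swift
// With a 256-px-wide texture (a power of two), per-texel address math
// collapses to masks and shifts; with an arbitrary width you need a real
// modulo and a multiply for every fetch.
let width = 256                    // 1 << 8
let mask = width - 1               // 0xFF

func wrap(_ x: Int) -> Int { x & mask }                        // same as x % width for x >= 0
func texelIndex(x: Int, y: Int) -> Int { (y << 8) | wrap(x) }  // same as y * width + x (y assumed in range)

wrap(260)                 // 4
texelIndex(x: 3, y: 2)    // 515
```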
I would hazard a guess that the recent trend in relaxing this restriction is due to GPUs moving to floating-point maths.
Edit: The "because of mipmapping" answer is incorrect. Mipmapped, non-power-of-two textures are a common feature of modern GPUs.