I already know that if Unity is using a texture with dimensions of 250x250, it will pad the texture to 256x256 so that the dimensions are a power of 2. If I had a texture of size 512x256, would it pad to 512x512 to make the texture square, or would it stay at 512x256, since each side is already a power of 2?
It should keep the 512x256 resolution. Each side needs to be a power of two; they don't have to be equal.
Note that it's not strictly true that Unity will change the dimensions of every one of your textures. You can change the texture properties to legacy GUI and then you can have a pixel-perfect texture. It will be a bit wasteful or slower (depending on GPU drivers), but it will work quite well.
I'm making a simple 3D game using my own shader, and I want to emulate an 8-bit pixel art style. To do this, I need to sample a low-resolution texture without the sampler interpolating between texels. Hopefully this will also reduce processing time, since it won't need to calculate the interpolation. Is there a way I can accomplish this?
OK, it turns out there was something I didn't realise about 2D textures. I'd already set the texture's Filter Mode to 'Point', which turns off interpolation during sampling, but I hadn't noticed the setting marked 'Non-Power of 2'. What this does is, if your texture has dimensions that aren't a power of 2, upscale it to the nearest power-of-2 size (e.g. 1024). It does this so that it can apply compression to the texture, since compression only works on power-of-2 images in Unity. However, the upscale is filtered regardless of the Filter Mode, so the interpolation was actually being 'baked in' at import time and had nothing to do with the sampler or the shader whatsoever.
Turning this off solved the problem; however, once I'm happy with my textures I will expand them to a power-of-2 size with trailing padding in order to take advantage of compression.
Does Unity's renderer read the entire texture, or only the pixels the UVs overlap?
For example, in the following texture with the following UVs, only rows C, D, E and F are needed. Disregarding the extra storage space the rest of the texture occupies, are there any drawbacks to doing this?
Does the renderer read the entire texture or only the relevant pixels?
Unity keeps the whole texture in memory regardless of which parts the UVs cover; the texture mapping itself is done in shaders.
That's why it's recommended to try to occupy as much of the UV space as possible. You can even go further and use the same texture for multiple objects.
Even though this only covers OpenGL, it is a good resource for understanding how all of this works: https://learnopengl.com/Getting-started/Textures
I'm trying to figure out why my Object's textures keep turning white once I scale the object down to 1% (or less) of its normal size.
I can manipulate the objects in real time with my fingers, and there is a threshold where all the textures (except a few) turn completely ghost white, as shown below:
https://imgur.com/wMykeFw
Any input to fix is appreciated!
One potential cause of this issue is that certain shaders miscalculate how to render textures when an object's scale is set to very low values.
To render this asset at such a small size using the same shader, re-import the mesh with a smaller scale factor (in the mesh import settings); that may fix it.
Select the ARCamera, then the Camera. In the Inspector, select the camera's clipping plane and increase it (you want to find the minimum clipping value that still works in order to save on memory, so start at 20000 and work your way backwards until it stops working, then back up a notch).
Next (still in the camera's Inspector), select Rendering Path and set it to Legacy Vertex Lit.
This should clear it up for you.
In HLSL, how can I calculate lighting based on the texels of a texture, instead of the screen pixels that make up the object?
In other words, if I have a 64x64px texture being rendered on a 1024x768px screen, I want to calculate the lighting as it affects the 64x64px space, resulting in jagged pixels instead of a smooth line.
I've researched dozens of answers but I'm not sure how I can determine at all times if a fragment is a part of a pixel that should be fully lit or not. Maybe this is the wrong approach?
The current implementation uses a diffuse texture and a normal map. It results in what appear to be artifacts (diagonal lines) in the output:
Note: The reason it almost looks correct is because of the normal map, which causes some adjacent pixels to have normals that are angled just enough to light some pixels and not others.
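One common approach is to quantize the UV to texel centres before doing the lighting math, so every fragment inside a texel shades with the same normal. Below is a minimal sketch of that idea in GLSL for brevity (the HLSL version is analogous); the 64x64 size and the sampler/uniform names are just assumed examples, not code from this thread:
uniform sampler2D diffuseMap;   // assumed names, for illustration only
uniform sampler2D normalMap;
uniform vec2 texSize;           // e.g. vec2(64.0, 64.0)
uniform vec3 lightDir;          // normalized, in the same space as the normal map
varying vec2 uv;

void main()
{
    // Snap to the centre of the texel so every fragment in that texel
    // samples the exact same normal (and albedo).
    vec2 snappedUV = (floor(uv * texSize) + 0.5) / texSize;

    vec3 albedo = texture2D(diffuseMap, snappedUV).rgb;
    vec3 normal = normalize(texture2D(normalMap, snappedUV).rgb * 2.0 - 1.0);

    float diffuse = max(dot(normal, lightDir), 0.0);
    gl_FragColor = vec4(albedo * diffuse, 1.0);
}
With the lighting inputs snapped like this, the lit/unlit boundary follows the texel grid instead of cutting diagonally across individual texels.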
Does anybody know how to find the average luminosity of a texture in a fragment shader? I have access to both RGB and YUV textures; the Y component in YUV is an array, and I want to get an average value from it.
I recently had to do this myself for input images and video frames that I had as OpenGL ES textures. I didn't generate mipmaps for these, because I was working with non-power-of-two textures and you can't generate mipmaps for NPOT textures in OpenGL ES 2.0 on iOS.
Instead, I did a multistage reduction similar to mipmap generation, but with some slight tweaks. Each step down reduced the size of the image by a factor of four in both width and height, rather than the normal factor of two used for mipmaps. I did this by sampling from four texture locations that were in the middle of the four 2x2 squares of pixels that made up a 4x4 area in the higher-level image. This takes advantage of hardware texture interpolation to average each set of four pixels; then I just had to average those four samples to yield a 16X reduction in pixel count in a single step.
I converted the image to luminance at the very first stage using a dot product of the RGB values with a vec3 of (0.2125, 0.7154, 0.0721). This allowed me to just read the red channel for each subsequent reduction stage, which really helps on iOS hardware. Note that you don't need this if you are starting with a Y channel luminance texture already, but I was dealing with RGB images.
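As a rough GLSL ES sketch of those two stages (the uniform and varying names here are made up for illustration, not the actual code from the GPUImageLuminosity class mentioned below):
// Stage 1 (only needed for RGB input): convert to luminance, stored in the red channel.
uniform sampler2D inputTexture;
varying vec2 textureCoordinate;

void main()
{
    vec3 rgb = texture2D(inputTexture, textureCoordinate).rgb;
    float luminance = dot(rgb, vec3(0.2125, 0.7154, 0.0721));
    gl_FragColor = vec4(vec3(luminance), 1.0);
}

// Each reduction stage: one output fragment per 4x4 block of input texels.
// upperLeftTexCoord is the block's top-left corner; texelWidth/texelHeight are
// one input texel in UV units. Each fetch sits at the centre of a 2x2 quad of
// texels, so hardware bilinear filtering averages those four texels; averaging
// the four fetches gives the 16X reduction per pass described above.
uniform sampler2D inputTexture;
varying vec2 upperLeftTexCoord;
uniform float texelWidth;
uniform float texelHeight;

void main()
{
    float s1 = texture2D(inputTexture, upperLeftTexCoord + vec2(texelWidth,       texelHeight)).r;
    float s2 = texture2D(inputTexture, upperLeftTexCoord + vec2(3.0 * texelWidth, texelHeight)).r;
    float s3 = texture2D(inputTexture, upperLeftTexCoord + vec2(texelWidth,       3.0 * texelHeight)).r;
    float s4 = texture2D(inputTexture, upperLeftTexCoord + vec2(3.0 * texelWidth, 3.0 * texelHeight)).r;

    gl_FragColor = vec4(vec3((s1 + s2 + s3 + s4) * 0.25), 1.0);
}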
Once the image had been reduced to a sufficiently small size, I read the pixels from that back onto the CPU and did a last quick iteration over the remaining few to arrive at the final luminosity value.
For a 640x480 video frame, this process yields a luminosity value in ~6 ms on an iPhone 4, and I think I can squeeze out a 1-2 ms reduction in that processing time with a little tuning. In my experience, that seems faster than the iOS devices normally generate mipmaps for power-of-two images at around that size, but I don't have solid numbers to back that up.
If you wish to see this in action, check out the code for the GPUImageLuminosity class in my open source GPUImage framework (and the GPUImageAverageColor superclass). The FilterShowcase example demonstrates this luminosity extractor in action.
You generally don't do this just with a shader.
One of the more common methods is to create a buffer texture with a full mipmap chain (down to 1x1; this is important). When you want to find the luminosity, you copy the backbuffer to this buffer, then regenerate the mips (standard box/bilinear downsampling is what you want, since each level then averages the level above it). The bottom 1x1 pixel will then hold the average color of the entire surface and can be used to find the average luminance through something like (c.r * 0.6) + (c.g * 0.3) + (c.b * 0.1). (Edit: if you have YUV, do the same and use the Y; the trick is just averaging the texture down to a single value, which is what mips do.)
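In GLSL that final lookup might look roughly like this, assuming explicit-LOD sampling is available in the fragment stage (textureLod on desktop GL / ES 3.0, or texture2DLodEXT via the EXT_shader_texture_lod extension on ES 2.0); sceneCopy and lastMipLevel are assumed names:
#extension GL_EXT_shader_texture_lod : require

uniform sampler2D sceneCopy;    // the mipmapped copy of the backbuffer
uniform float lastMipLevel;     // log2(max(width, height)), i.e. the 1x1 level

void main()
{
    // The 1x1 mip holds the average colour of the whole surface.
    vec3 c = texture2DLodEXT(sceneCopy, vec2(0.5, 0.5), lastMipLevel).rgb;
    float lum = c.r * 0.6 + c.g * 0.3 + c.b * 0.1;
    gl_FragColor = vec4(vec3(lum), 1.0);
}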
This isn't a precise technique, but is reasonably fast, especially on hardware that can generate mipmaps internally.
I'm presenting a solution for the RGB texture here, as I'm not sure mipmap generation would work with a YUV texture.
The first step is to create mipmaps for the texture, if not already present:
glGenerateMipmapOES(GL_TEXTURE_2D);
Now we can access the RGB value of the smallest mipmap level from the fragment shader by using the optional third argument of the sampler function texture2D, the "bias":
vec4 color = texture2D(sampler, vec2(0.5, 0.5), 8.0);
This will shift the mipmap level up eight levels, resulting in sampling a far smaller level.
If you have a 256x256 texture and render it with a scale of 1, a bias of 8.0 will effectively reduce the picked mipmap to the smallest 1x1 level (256 / 2^8 == 1). Of course you have to adjust the bias for your conditions to sample the smallest level.
OK, now we have the average RGB value of the whole image. The third step is to reduce RGB to luminosity:
float lum = dot(vec3(0.30, 0.59, 0.11), color.xyz);
The dot product is just a fancy (and fast) way of calculating a weighted sum.
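Written out, that line computes exactly:
float lum = 0.30 * color.r + 0.59 * color.g + 0.11 * color.b;  // identical to the dot product above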