Build a tiled big texture from other textures - Unity3D

I am making a Unity 2D RTS game and I thought of using one big texture for the tiled map (instead of a lot of textures, for memory reasons...).
The tiled map is supposed to be generated randomly at runtime, so I don't want to save a texture and load it. I want the map to be generated and then built from a set of texture resources.
So, I have little tile textures of grass/forest/hills etc., and after I generate the map randomly, I need to draw those little textures onto my big map texture so I can use it as my map.
How can I draw a texture from my resources onto another texture? I saw there are only the Get/SetPixel functions... so I could use those to copy all the pixels one by one to the big texture, but is there something easier?
Is my solution for the map OK? (Is it better than just creating a lot of texture tiles side by side? Is there another, better solution?)

The correct way to create a large tiled map would be to compose it from smaller, approximately screen-sized chunks. Unity will then correctly skip drawing the chunks that are off-screen.
As for your question about copying to a texture: I have not done this before in Unity, but this process is called Blitting, and there just happens to be a method in Unity called Graphics.Blit(). It takes a source texture and copies it into a destination texture, which sounds like exactly what you're looking for. However, it requires Unity Pro :(
There is also SetPixels(), but it sounds like this function does the processing on the CPU rather than the GPU, so it's going to be extremely slow/resource-intensive.
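For a rough sketch of the chunking advice above (everything here is illustrative: the class, the field names, and the BuildChunkTexture helper are made up, and the chunk composition itself is left as a placeholder):

using UnityEngine;

// Sketch only: split the map into roughly screen-sized chunk sprites so
// Unity's frustum culling can skip the off-screen ones.
public class ChunkedMap : MonoBehaviour
{
    public int chunksX = 4, chunksY = 4;
    public int chunkPixels = 512;      // roughly screen-sized
    const float PixelsPerUnit = 100f;

    void Start()
    {
        for (int cy = 0; cy < chunksY; cy++)
        {
            for (int cx = 0; cx < chunksX; cx++)
            {
                Texture2D tex = BuildChunkTexture(cx, cy);
                var go = new GameObject("Chunk_" + cx + "_" + cy);
                var sr = go.AddComponent<SpriteRenderer>();
                sr.sprite = Sprite.Create(tex,
                    new Rect(0, 0, chunkPixels, chunkPixels),
                    Vector2.zero, PixelsPerUnit);
                go.transform.position = new Vector3(
                    cx * chunkPixels / PixelsPerUnit,
                    cy * chunkPixels / PixelsPerUnit, 0f);
            }
        }
    }

    // Placeholder: compose this chunk from tile textures, e.g. with the
    // GetPixels/SetPixels approach shown in the next answer.
    Texture2D BuildChunkTexture(int cx, int cy)
    {
        return new Texture2D(chunkPixels, chunkPixels);
    }
}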

Well, after more searching I discovered the GetPixels/SetPixels block functions:
Texture2D sourceTex = ...; // get it from somewhere, e.g. Resources.Load<Texture2D>()
Color[] pix = sourceTex.GetPixels(x, y, width, height); // grab the block of pixels
Texture2D destTex = new Texture2D(width, height); // create a new texture to copy the pixels into
destTex.SetPixels(pix);
destTex.Apply(); // important: applies the changes and uploads them to the GPU
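Building the whole map is then just that block copy repeated per tile. A minimal sketch, assuming same-sized, readable tile textures and a mapIndices grid produced by the random generator (both names are illustrative):

using UnityEngine;

public class MapBuilder : MonoBehaviour
{
    public Texture2D[] tileTextures; // grass, forest, hills, ... all the same size
    public int mapWidth = 32;        // map size measured in tiles
    public int mapHeight = 32;

    public Texture2D BuildMapTexture(int[,] mapIndices)
    {
        int tileSize = tileTextures[0].width; // assumes square, uniformly sized tiles
        var mapTex = new Texture2D(mapWidth * tileSize, mapHeight * tileSize);

        for (int y = 0; y < mapHeight; y++)
        {
            for (int x = 0; x < mapWidth; x++)
            {
                // Copy one tile's pixels into its block of the big texture.
                Color[] pix = tileTextures[mapIndices[x, y]].GetPixels();
                mapTex.SetPixels(x * tileSize, y * tileSize, tileSize, tileSize, pix);
            }
        }

        mapTex.Apply(); // one GPU upload after all the tiles are written
        return mapTex;
    }
}

Note that GetPixels() requires the tile textures to have Read/Write enabled in their import settings, and calling Apply() once at the end avoids repeated GPU uploads. Newer Unity versions also have Graphics.CopyTexture(), which can do this kind of region copy on the GPU.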


Are unused texture pixels read by Unity's renderer?

Does Unity's renderer read the entire texture, or only the pixels the UVs overlap?
For example, in the following texture with the following UVs, only rows C, D, E and F are needed. Disregarding the extra storage space the rest of the texture occupies, are there any drawbacks to doing this?
Does the renderer read the entire texture or only the relevant pixels?
Unity would keep the whole texture in memory. Texture mapping is done in shaders.
That's why it's recommended to try to occupy as much of the UV space as possible. You can even go further and use the same texture for multiple objects.
Even though this only covers OpenGL, it is a good resource for understanding how all of this works: https://learnopengl.com/Getting-started/Textures
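To illustrate that last point with a minimal sketch (the field names here are made up): many renderers can point at the same material, and therefore the same texture, so only one copy of it is kept in memory and the draw calls can be batched.

using UnityEngine;

public class SharedAtlasExample : MonoBehaviour
{
    public Material atlasMaterial;       // material whose texture is the shared atlas
    public Renderer[] objectsUsingAtlas; // each mesh's UVs pick out its own region

    void Start()
    {
        foreach (Renderer r in objectsUsingAtlas)
            r.sharedMaterial = atlasMaterial; // sharedMaterial avoids per-object material copies
    }
}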

Paint on mesh for makeover

I've been struggling for weeks now on a part of the game I'm making.
As a beginner in Unity and programming, I need your experience and advice to understand how I can paint on a skinned mesh like this (from 1:10):
https://www.youtube.com/watch?v=grVEK1Bb6ZM
I've spent a lot of time trying to find a solution, with no result (decal shader to a separate texture, painting on the mesh with alpha, projecting a texture, merging textures...). But these solutions either look bad on mobile or aren't exactly what I need.
So if someone knows a way to do that, even a little info or anything that will drive my research, it's very welcome.
Thank you!
The example you provide limits the range of the painting with a bitmap mask (i.e. on the eyebrows, or on the lips), so the painting is only meant for a more enjoyable UX. If this is what you need, you should probably do something like this:
You need to know where the mouse is interacting with the model. Raycasting is expensive and requires updating the colliders every frame, since your character is skinned. If you use the masking trick from your example, this dramatically reduces the amount of computation, since you could pass a subset of the mesh containing only that specific area (maybe just the face, for example);
see https://docs.unity3d.com/ScriptReference/SkinnedMeshRenderer.BakeMesh.html
and https://answers.unity.com/questions/39490/collider-on-skinned-mesh.html
(If you can't, there could be other tricks, like rendering the character's UVs into a separate float buffer/texture and sampling that buffer at the mouse position.)
Once you can raycast against the mesh, you can fetch the UV coordinates of the hit:
https://docs.unity3d.com/ScriptReference/RaycastHit-textureCoord.html
Using those UVs you can write to a texture, instance particles/objects on a render target, etc. (there are many options here).
You then need to combine that texture with the bitmap mask in the shader of the character.
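Putting those steps together, here is a minimal, unoptimized sketch; it assumes a dedicated MeshCollider for the paintable area and a writable paintTexture, and every name in it is illustrative:

using UnityEngine;

public class SkinnedMeshPainter : MonoBehaviour
{
    public SkinnedMeshRenderer skin;    // ideally just the face sub-mesh
    public MeshCollider paintCollider;  // collider used only for paint raycasts
    public Texture2D paintTexture;      // writable texture, combined with the mask in the shader
    public Color brushColor = Color.red;

    Mesh bakedMesh;

    void Awake()
    {
        bakedMesh = new Mesh();
    }

    void Update()
    {
        if (!Input.GetMouseButton(0)) return;

        // 1. Bake the current skinned pose so the collider follows the animation.
        //    This is the expensive step, which is why baking only a small
        //    sub-mesh (e.g. the face) matters.
        skin.BakeMesh(bakedMesh);
        paintCollider.sharedMesh = null;      // force the collider to pick up the new mesh
        paintCollider.sharedMesh = bakedMesh;

        // 2. Raycast from the mouse and read the UV of the hit
        //    (RaycastHit.textureCoord only works with a MeshCollider).
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit) && hit.collider == paintCollider)
        {
            Vector2 uv = hit.textureCoord;

            // 3. Write into the paint texture at that UV.
            paintTexture.SetPixel(
                (int)(uv.x * paintTexture.width),
                (int)(uv.y * paintTexture.height),
                brushColor);
            paintTexture.Apply();
        }
    }
}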

In OpenGL ES 2.0 for iOS, how can I use a CVPixelBufferRef to update a cubemap texture?

I have managed to get a CVPixelBufferRef from an AVPlayer to feed pixel data that I can use to texture a 2D object. When my pixel buffer has data in it, I do:
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
kCFAllocatorDefault,
videoTextureCache_,
pixelBuffer, //this is a CVPixelBufferRef
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
I would like to use this buffer to create a GL_TEXTURE_CUBE_MAP. My video frame data is actually 6 sections in one image (e.g. a cubestrip) that together make up the sides of a cube. Any thoughts on a way to do this?
I had thought to just pretend my GL_TEXTURE_2D was a GL_TEXTURE_CUBE_MAP and replace the texture on my skybox with the texture generated by the code above, but this creates a distorted mess (as I suppose should be expected when trying to force a skybox to be textured with a GL_TEXTURE_2D).
The other idea was to set up unpacking using glPixelStorei and then read from the pixel buffer:
glPixelStorei(GL_UNPACK_ROW_LENGTH, width);
glPixelStorei(GL_UNPACK_SKIP_PIXELS, X);
glPixelStorei(GL_UNPACK_SKIP_ROWS, Y);
glTexImage2D(..., &pixelBuffer);
But unbelievably, GL_UNPACK_ROW_LENGTH is not supported in OpenGL ES 2.0 for iOS.
So, is there:
- Any way to split up the pixel data in my CVPixelBufferRef by indexing the buffer into pixel subsets before using it to make a texture?
- Any way to make six new GL_TEXTURE_2Ds as indexed subsets of the GL_TEXTURE_2D created by the code above?
- Any way to convert a GL_TEXTURE_2D into a valid GL_TEXTURE_CUBE_MAP? (E.g., GLKit has a skybox effect that loads a GL_TEXTURE_CUBE_MAP from a single cubestrip file. It doesn't have a method to load a texture from memory, though, or I would be sorted.)
- Any other ideas?
If it were impossible any other way (which is unlikely; there is probably an alternate way, so this is probably not the best answer and involves more work than necessary), here is a hack I'd try:
A cube map works by projecting the texture for each face from a point at the center of the geometry out toward each of the cube faces. So you could reproduce that behavior yourself: use projective texturing to make six draw calls, one for each face of your cube. Each time, you'd first draw the face you're interested in to the stencil buffer, then calculate the projection matrix for your texture (this technique is used a lot for 'spotlight' effects in games), then figure out the transform matrix required to adjust the fragment shader's texture read so that, for each face, only the portion of the texture that corresponds to that face winds up within the (0..1) texture lookup range. If everything has gone right, anything outside the 0..1 range should be discarded by the stencil buffer, and you'd be left with a DIY cube map built from a TEXTURE_2D.
The above method is actually really similar to what I'm doing for an app right now, except I'm only using projective texturing to mask off and replace a small portion of the cube map. I need to pixel-match the edges of the small square I'm projecting so that it's seamlessly applied to the skybox, which is why I feel confident that this method will actually reproduce the cube map behavior; otherwise, pixel-matching wouldn't be possible.
Anyway, I hope you find a way to simply convert your 2D texture to a cube map, because that would probably be much easier and cleaner.

Tiling an image that is part of a texture atlas

I'm using Cocos2D. What is the most efficient way to tile an image when it's part of a texture atlas that's been generated using Texture Packer? I have an image that is 10 x 320 and I want to tile it to fill the screen.
I've used this code before for tiling images:
bgHolder = [CCSprite spriteWithFile:@"bg.png" rect:CGRectMake(0, 0, 700, 300*155)];
ccTexParams params = {GL_LINEAR,GL_LINEAR,GL_REPEAT,GL_REPEAT};
[bgHolder.texture setTexParameters:&params];
[self addChild:bgHolder];
but I don't think I can use this approach when the image I want to tile isn't square and is only a small part of the overall texture.
Chaining a bunch of CCSprites seems pretty inefficient to me, so I'm hoping there is a better way.
Use one sprite per tile. That's the way to do it. You should use sprite batching (a CCSpriteBatchNode in Cocos2D) to keep the number of draw calls at 1. Rendering 48 sprites is not much worse than rendering one 480x320 sprite when using sprite batching.

How to collapse five OpenGL textures into one?

I want to merge 5 "sublayers" into one single texture (you know, something like Flatten Image in Photoshop) in OpenGL ES 1.x on iPhone. I'm new to OpenGL, and just haven't found the answer yet.
Assuming they are images to begin with, can't you just draw them sequentially onto an in-memory image?
You don't need GL to combine textures together. Just do the math on each texel in C.
Now, if you want to use GL, you'll want to render to a texture (your final result).
This is done with OES_framebuffer_object. How you draw to that texture is completely up to you. You could draw 5 quads, each with a single texture, and use blending to merge them (you'll have to specify which math you want to apply, though), or you can use multi-texturing to do the work in fewer passes (and use texture environments to specify how to merge).
What kind of flattening operation do you want?