Can I create a tile engine in a pixel shader?

I am attempting to create a tile engine using a pixel shader and two textures. One texture will hold the tileset and the other the map.
Is it possible to read the texture data as actual (unsampled) data so I can pull indexes from the map?
What is the best way to read that pixel data?
I have tried just tex2D, but that leaves something to be desired (I am a bit new to pixel shaders, to be honest).
Basically, I need a way to read the actual data from a specific pixel in my map texture and use that as an integer index into the tile texture. Assume I have managed to create and pass the appropriate textures to the shader.
Any thoughts?
(Using MonoGame for Metro, so DirectX feature level 9_1.)

If you use tex2D and pass in (x + 0.5) / width and (y + 0.5) / height, you should get the exact pixel value at (x, y). More information here: Texture memory-tex2D basics
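In HLSL terms, the whole lookup might look roughly like the sketch below. This is untested, and every name (MapTexture, TileTexture, MapSize, TilesPerRow, TileSizeUV) is a placeholder you would wire up yourself; it assumes the map stores each tile index in its red channel as index/255.

texture MapTexture;     // one texel per map cell, tile index in the red channel
texture TileTexture;    // the tileset atlas
sampler MapSampler  = sampler_state { Texture = <MapTexture>;  MinFilter = Point; MagFilter = Point; MipFilter = None; };
sampler TileSampler = sampler_state { Texture = <TileTexture>; MinFilter = Point; MagFilter = Point; MipFilter = None; };

float2 MapSize;         // map dimensions in cells, e.g. (64, 64)
float  TilesPerRow;     // tiles per row in the tileset, e.g. 16
float2 TileSizeUV;      // size of one tile in tileset UV space, e.g. (1/16, 1/16)

float4 TilePS(float2 uv : TEXCOORD0) : COLOR0
{
    // which map cell are we in, and where inside that cell?
    float2 cell     = floor(uv * MapSize);
    float2 cellFrac = frac(uv * MapSize);

    // sample the exact texel center so filtering can't blend neighbouring indices
    float2 mapUV = (cell + 0.5) / MapSize;
    float  index = floor(tex2D(MapSampler, mapUV).r * 255.0 + 0.5);

    // convert the index into the tile's origin in the tileset, then offset within that tile
    float2 tileOrigin = float2(fmod(index, TilesPerRow), floor(index / TilesPerRow)) * TileSizeUV;
    return tex2D(TileSampler, tileOrigin + cellFrac * TileSizeUV);
}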

Related

Evaluate depth for orthographic camera

I have a post-processing shader. For simplicity, my post-processing shader only shows _CameraDepthTexture at the given UV. This shader is written in code rather than in Shader Graph.
I'm moving to Shader Graph and I want a material for all of my objects that achieves the exact same effect (shows the same depth color), although I can't use the Scene Depth node. How can I generate the exact same color for my objects in Shader Graph?
As the depth is related to the distance between the camera and the objects, I'm trying to set the depth like this:
I take the vector (vertex world position - camera world position).
I project this vector onto the camera's forward (view direction) vector.
I remap the length of this projected vector from (near plane, far plane) to (1, 0), roughly as in the sketch below.
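In Custom Function (HLSL) terms, what I compute is roughly this sketch; it is untested, and the function name and inputs are just placeholders I feed from the graph:

void OrthoDepth_float(float3 worldPos, float3 cameraPos, float3 cameraForward,
                      float nearPlane, float farPlane, out float depth)
{
    // distance of the point along the camera's view direction
    float viewDist = dot(worldPos - cameraPos, normalize(cameraForward));
    // remap (near plane, far plane) to (1, 0)
    depth = 1.0 - saturate((viewDist - nearPlane) / (farPlane - nearPlane));
}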
It looks like my depth is the same as _CameraDepthTexture, but when objects are too close to the camera, they are different (my version is darker).
How can I write a shader without Scene Depth node that generates the exact same color as _CameraDepthTexture? My camera is orthographic with orthographic size 10.4, near = -50 and far = 50.

ARKit project point with previous device position

I'm combining ARKit with a CNN to constantly update ARKit nodes when they drift. So:
Get estimate of node position with ARKit and place a virtual object in the world
Use CNN to get its estimated 2D location of the object
Update the node position accordingly (to refine its location in 3D space)
The problem is that #2 takes 0.3 s or so. Therefore I can't use sceneView.unprojectPoint, because the point will correspond to a 3D point from the device's world position from #1.
How do I calculate the 3D vector from my old location to the CNN's 2D point?
unprojectPoint is just a matrix-math convenience function similar to those found in many graphics-oriented libraries (like DirectX, old-style OpenGL, Three.js, etc). In SceneKit, it's provided as a method on the view, which means it operates using the model/view/projection matrices and viewport the view currently uses for rendering. However, if you know how that function works, you can implement it yourself.
An Unproject function typically does two things:
Convert viewport coordinates (pixels) to the clip-space coordinate system (-1.0 to 1.0 in all directions).
Reverse the projection transform (assuming some arbitrary Z value in clip space) and the view (camera) transform to get to 3D world-space coordinates.
Given that knowledge, we can build our own function. (Warning: untested.)
import simd          // float3, float4x4
import CoreGraphics  // CGRect

func unproject(screenPoint: float3, // see below for Z depth hint discussion
               modelView: float4x4,
               projection: float4x4,
               viewport: CGRect) -> float3 {
    // viewport to clip: subtract viewport origin, divide by size,
    // scale/offset from 0...1 to -1...1 coordinate space
    let clip = (screenPoint - float3(Float(viewport.minX), Float(viewport.minY), 0.0))
        / float3(Float(viewport.width), Float(viewport.height), 1.0)
        * float3(2) - float3(1)
    // apply the reverse of the model-view-projection transform
    let inversePM = (projection * modelView).inverse
    let result = inversePM * float4(clip.x, clip.y, clip.z, 1.0)
    return float3(result.x, result.y, result.z) / result.w // perspective divide
}
Now, to use it... The modelView matrix you pass to this function is the inverse of ARCamera.transform, and you can also get projectionMatrix directly from ARCamera. So, if you're grabbing a 2D position at one point in time, grab the camera matrices then, too, so that you can work backward to 3D as of that time.
There's still the issue of that "Z depth hint" I mentioned: when the renderer projects 3D to 2D it loses information (one of those D's, actually). So you have to recover or guess that information when you convert back to 3D: the screenPoint you pass in to the above function is the x and y pixel coordinates, plus a depth value between 0 and 1. Zero is closer to the camera, 1 is farther away. How you make use of that sort of depends on how the rest of your algorithm is designed. (At the very least, you can unproject both Z=0 and Z=1, and you'll get the endpoints of a line segment in 3D, with your original point somewhere along that line.)
Of course, whether this can actually be put together with your novel CNN-based approach is another question entirely. But at least you learned some useful 3D graphics math!

Unity Repeat UV Coordinates On Quadtree Shaderforge

As you can see in the image, on larger tiles (n > 1) the texture should be repeated to match the current rect size. I don't know how I can achieve this!
FYI, I'm getting the tile texture ID from the alpha value of the vertex color.
Here is the shader I'm using:
[UPDATE]
Thanks for clarifying the UV coordinates; unfortunately that doesn't answer my question. Take a look at the following picture...
Your shader is fine; it's actually the vertex UVs that are the problem:
So for all rectangles the UV coordinates are as follows: [0, 0] / [0, rect.height] / [rect.width, 0] / [rect.width, rect.height]. So the UVs go beyond 1.
Your shader is designed to support the standard UV space, in which case you should replace rect.width and rect.height with 1.
By using UV coords greater than one, you're effectively asking for texels outside of the specified texture. When used with a texture atlas, that means you're asking for texels outside of the specified tile -- in this case, those happen to be white, and that's what you're seeing in the rendered output.
Tiling with an atlas texture
Updating because I missed an important detail: you want a tiling material.
Usually, UVs interpolate linearly:
For tiling, you essentially want more of a "sawtooth" output:
For a non-atlas texture, you can adjust material scale/wrap settings and call it done. For an atlas texture, it's possible but you'll end up with a shader and/or geometry that aren't quite standard.
The "most standard" solution would be if your larger quads are on a separate mesh from the smaller ones:
Add a float material param named uv_scale or some such
Add a Multiply node that scales incoming UVs by uv_scale
Pass output from that into a Frac node
Pass output from that into the UV Tile node
Pseudocode is roughly: uv = frac(uv * uv_scale)
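In plain Cg/HLSL terms, the idea is roughly this sketch (untested; the parameter names are placeholders, and tileOrigin/tileSize come from however you map the vertex-color ID to a tile in your atlas):

float2 AtlasTileUV(float2 uv, float uv_scale, float2 tileOrigin, float2 tileSize)
{
    // sawtooth: repeat 0..1 across the quad uv_scale times
    float2 repeated = frac(uv * uv_scale);
    // remap the repeated coordinate into the chosen tile's sub-rectangle of the atlas
    return tileOrigin + repeated * tileSize;
}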
If you need all of your quads to be in the same mesh, you end up needing non-standard geometry:
Change your UVs again (going back to rect.width and rect.height)
Add a Frac node before the UV Tile node
This is a simpler shader change, but has the downside that your geometry will no longer be cleanly supported in other shaders.
Thanks rutter!
I've implemented your solution in my shader and now it works perfectly!
So for everyone looking for this, here is the shader I'm using now:
Cheers, M

Atlas UV map vs. Local UV map

I want glow on my sprites using the UV coordinates, but the problem is that if the sprite originates from an atlas created by Unity's sprite packer, the UVs aren't normalized from 0 to 1 but run between two arbitrary values. How do I normalize the UV data for a single sprite that resides in an atlas? Am I required to pass additional information into the shader, or should I already have the necessary information to do this? The image below describes the situation:
The hand to the left is a sprite not from an atlas. The hand on the right is a sprite from an atlas. I want the right hand to look the same as the hand on the left.
I am not that familiar with shaders yet, so I am reliant on ShaderForge. I am using the following ShaderForge layout:
You probably already know this, but the fundamental problem is the output of your "UV Coords" node. The other nodes in your shader are expecting normalized UVs ranging from 0 to 1, but that's not what you're getting when you use the texture atlas.
I can think of two ways to solve that. They're both viable, so I'd recommend trying whichever one fits more cleanly into your workflow.
Add a second UV channel
It's easy to treat UV0 as the only UV channel, but for certain techniques it can be helpful to add multiple UV coords to each vertex.
As an example, lightmapping is a popular feature where each model has its own individual textures (diffuse/normal/etc), but each scene has a pre-baked lightmap texture that is shared between multiple models -- sort of like an atlas for lighting information. UVs for these will not match, so the lightmap UVs are stored on a second channel (UV1).
In a similar fashion, you could use UV0 for atlas UVs and UV1 for local UVs. That gives you clean input on the [0,1] range that you can use for that multiply effect.
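In shader-code terms, the fragment side would look roughly like the sketch below (untested; the struct, semantics, and the particular glow falloff are placeholders for whatever your graph does with the multiply):

struct v2f
{
    float4 pos     : SV_POSITION;
    float2 atlasUV : TEXCOORD0;   // UV0: points into the sprite atlas
    float2 localUV : TEXCOORD1;   // UV1: 0..1 across this sprite only
};

sampler2D _MainTex;

float4 frag(v2f i) : SV_Target
{
    float4 sprite = tex2D(_MainTex, i.atlasUV);
    // example glow factor: brightest at the sprite's centre, fading toward its edges
    float2 centered = i.localUV * 2.0 - 1.0;
    float  glow     = saturate(1.0 - length(centered));
    return sprite * glow;
}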
Add material params
You could scale and offset the UVs so that they are normalized.
Before rendering, find the min and max values for the mesh's UV coords
Pass those values in as material parameters
Add shader nodes to scale and offset the input UV, such that the range is normalized
For example, you could subtract min from each UV (offset), then divide by max - min (scale).
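As an untested sketch, with _UVMin and _UVMax as placeholder parameter names you would set from script before rendering:

float2 _UVMin;   // smallest (u, v) this sprite uses in the atlas
float2 _UVMax;   // largest  (u, v) this sprite uses in the atlas

float2 NormalizeAtlasUV(float2 atlasUV)
{
    // offset by the minimum, then scale by the range
    return (atlasUV - _UVMin) / (_UVMax - _UVMin);
}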

How to do texture mapping in OpenGL ES? (Mapping a 2D face onto a 3D mesh)

I need to convert a 2D face image into a 3D image. For this I thought of using texture mapping with OpenGL ES. I did a lot of googling for samples but couldn't find any. Can someone guide me through this?
Input: 2D image
Output: 3D image
Platform: iOS
As you know, OpenGL uses 3D or 2D vertices that have a few attributes, such as position, normal, color, and texture coordinates. You have to set these values first, and then you can render.
In ES 2.0 you pass these values to the vertex shader, then hand the texture coordinate and normal on to the fragment shader, where you use them with a texture sampler to render your face object.
If you work on iOS, these will be very helpful:
Explanation :
http://ofps.oreilly.com/titles/9780596804824/chtextures.html
Source Code :
http://www.developers-life.com/iphone-3d-samples.html