How does an image's alpha transfer so much information to other nodes in Unity Shader Graph? - unity3d

I have an image:
The upper part of this image has an alpha value of 1 (or 255 in RGBA).
The lower part of this image has an alpha value of 0.3; I use it for a shadow in the game.
When I import it into Unity Shader Graph as _MainTex and split out its alpha, it looks like this:
imported alpha
My first question is:
"alpha" is actually a Vector 1 type in the Unity documentation, but as I can see from the preview there are three shades: black indicates an alpha value of 0, pure white an alpha value of 1, and soft white an alpha value of 0.3. How can one single value carry so much information?
My first understanding is:
Each pixel's alpha value is already stored in the image; the "alpha" output in Shader Graph is just like a global parameter that controls them on a per-pixel basis. [I don't know if this is correct]
But when I feed the alpha into a Smoothstep node, intending to set every pixel's alpha below 0.3 to 0, I found it works like this:
smoothstep added to the alpha
As you can see, 0.3 < 0.99, so the translucency of the image is removed!
So here comes my second question:
Since "alpha" in the input works like a global parameter, how does it affect a picture separately?
My second understanding is:
"alpha" is just like a one-dimensional array that stores transparency like this:
{1, 1, 1, 0.3, 0.3, 0.3}
and when it passes through Smoothstep, its values are changed to:
{1, 1, 1, 0, 0, 0}
But that brings me back to my first question: alpha is a Vector 1 type, it only has one value to edit in the node, it cannot be an array!
So, how does an image's alpha transfer so much information to other nodes in Unity Shader Graph?
https://docs.unity3d.com/Packages/com.unity.shadergraph@6.9/manual/Data-Types.html
https://docs.unity3d.com/Packages/com.unity.shadergraph@6.9/manual/Smoothstep-Node.html
Any help would be really appreciated!

Shaders work in parallel: for any given vertex or pixel you only get data local to that element. Also, critically, 'pixel' (or 'fragment') here means a screen pixel, not a texel, which is a texture's pixel.
In this context, the output of the texture node is a single rgba Vector4 (4 scalar values) at the provided coordinate. This is disconnected from how textures are stored: filtering, compression and mipmapping all come into play (and the control over this comes from the sampler, which you can also provide to the node, even though most of the time it's implicit).
Smoothstep is a function that remaps a value, either a vector (like the rgba output of the texture node) or a scalar (like the alpha), into another range. More specifically, it smooths both ends of the range so that the slope is 0 at min and max. The linear equivalent is inverse lerp (which doesn't have a built-in instruction in HLSL). You can read about the breakdown on the Wikipedia page: https://www.wikiwand.com/en/Smoothstep
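To make the per-pixel idea concrete, here is a minimal NumPy sketch (not Shader Graph code): the same smoothstep formula runs once per pixel, each time on that pixel's own alpha scalar. The edge values 0.99 and 1.0 are only an assumption based on the "0.3 < 0.99" observation in the question:
import numpy as np

def smoothstep(edge0, edge1, x):
    # Same formula the Smoothstep node uses, applied element-wise.
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

# One alpha sample per pixel, conceptually the asker's {1, 1, 1, 0.3, 0.3, 0.3}.
alpha = np.array([1.0, 1.0, 1.0, 0.3, 0.3, 0.3])
print(smoothstep(0.99, 1.0, alpha))  # -> [1. 1. 1. 0. 0. 0.]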

Related

Change the scale of pixels to non-square dimensions in GEE

I want to export an image in Google Earth Engine, and I want the pixel size to match some in-situ plots with dimensions of 2 m x 30 m. How can I set the scale parameter to match these dimensions?
What I currently have (for a pixel size of 30 m x 30 m):
var myimage= sat_image.reduceRegions(my_points, ee.Reducer.first(),30)
print(myimage)
Export.table.toDrive(myimage,
"pts30",
"insitu_points")
Instead of specifying the scale parameter, specify a crsTransform parameter. It is "a row-major ordering of the 3x2 transform matrix", as the documentation puts it, which means you should specify a list of numbers like
crsTransform=[xScaling, 0, 0, 0, yScaling, 0]
Note that these numbers are not the same as the scale parameter. The scale parameter specifies meters of nominal scale per pixel. These numbers specify distance in the projection's coordinate system per pixel. For example, if the projection's numerical units are degrees, then the crsTransform factors are degrees per pixel. Thus, it is usually a good idea to specify a crs together with crsTransform so that you know which projection you're measuring with.
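As a rough sketch of how that could look, written here with the Earth Engine Python API rather than the question's JavaScript (sat_image and my_points are the question's variables; the EPSG code is only a placeholder for whatever metre-based projection fits your site, and the negative y scale follows the usual north-up convention):
import ee
ee.Initialize()

# 2 m x 30 m pixels, expressed in the units of the chosen CRS (metres for a UTM zone).
# Ordering is [xScale, xShear, xTranslation, yShear, yScale, yTranslation].
result = sat_image.reduceRegions(
    collection=my_points,
    reducer=ee.Reducer.first(),
    crs='EPSG:32633',
    crsTransform=[2, 0, 0, 0, -30, 0],
)

ee.batch.Export.table.toDrive(
    collection=result,
    description='pts_2x30',
    fileNamePrefix='insitu_points',
).start()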
Also, this may not be the best option for such very non-square pixels. Consider, instead of using reduceRegions on points, converting your points to rectangle features and then using reduceRegion with a mean reducer. This will have a similar effect, but lets you choose the exact shape you want to sample with.
I'm not sure how well either option will work or whether there are further issues to deal with, as I haven't done anything like this myself. But I looked around and there is very little discussion of crsTransform at all, so I figured it was worth writing up.

Unity3d: How to stretch (or share) a shader across multiple objects

I am tinkering around with cubes, trying to build variations of 'block types' (in an effort to get more familiar with Unity's abilities, shaders, editor tools, etc.).
I have a generic cube:
That I want to add a material/shader to... which I have done (no problem there):
It looks good enough (for my purposes) when it's just one block, but when I stick several together I don't like the effect: you can see the individual boxes, and the shader (which you can't see in the still image) is actually animated water, so when it's animating it looks pretty ugly.
(Bad/undesired)
I am trying to STRETCH or share the shader/material across all the selected blocks. See the example below (in this case I have taken a SINGLE block and stretched it, but that's not in keeping with the spirit of having individual blocks, so it's also not what I want).
(better/more desired)
I have thought of the following, but they all seem overly complicated (i.e. I think I'm going about it incorrectly):
Have the individual blocks, but stretch a single plane across them and then apply the material to that.
Programmatically join the meshes (I have found examples of this) and then apply the material/shader to the single combined object.
Take a single block and stretch it to the dimensions needed.
Maybe (not sure if I can) have a plane with the water material applied to it and use the blocks as masks so that water only shows for those blocks? Not sure how that would work...
In the end I am hoping to have the following:
Individual blocks (so I can interact with them).
Shader animations/colors are shared across the shared/connected blocks.
It won't always be a 2x3 grid... it could be diagonal, or contain odd shapes of connected blocks...
(this is all in EDITOR mode).
Any thoughts on how I might approach this?
Phrases you could try searching for are "converting from world space to uv space", "transforming uv coordinates", and "uv math". UV is the name for the coordinates in textures that a shader samples from, and if you take already existing shader code, you can do interesting things by changing the UVs it uses. One of those things is letting you "stretch" it.
In your 2x3 cube example, you could tell each cube to treat its U value as going from 0 to 0.5 or 0.5 to 1, and its V as going from 0 to 0.33, 0.33 to 0.67, or 0.67 to 1, depending on where it is, instead of each one going from 0 to 1. You could do this by having properties on the shader telling it where to start the UV (a) and where to end it (b), and then lerping from (0,0)-(1,1) to a-b.
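As a minimal sketch of that remap in NumPy (the a/b values are just a hypothetical middle-left cube of a 2x3 grid, not anything from an existing shader):
import numpy as np

def remap_uv(uv, a, b):
    # Lerp a cube's local 0-1 UV into its slice [a, b] of the shared texture.
    return a + uv * (b - a)

# U covers 0-0.5 and V covers 0.33-0.67 for this particular cube.
a = np.array([0.0, 0.33])
b = np.array([0.5, 0.67])
print(remap_uv(np.array([0.5, 0.5]), a, b))  # -> [0.25 0.5]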
My answer to a different question uses similar logic, comparing the world position of the pixel against a range of world positions to get a UV. The relevant shader code is:
fixed4 colorizedMapUV = (IN.worldPos.xz-_WorldSpaceRange.xy)
/ (_WorldSpaceRange.zw-_WorldSpaceRange.xy);
Another option is to only look at the world position and completely disregard any notion of where the "corners" of the UV should be. A method called "triplanar mapping" might guide you to a solution that does this.

How to have a generator class in shader glsl with amplify shader editor

I want to create a shader that can cover a surface with "circles" at many random positions.
The circles keep growing until the whole surface is covered with them.
Here is my first try with Amplify Shader Editor.
The problem is that I don't know how to make this shader create an array of "point makers" with random positions. I also want to control the circles from C#, for example:
point_maker = new point_maker[10];
point_maker[1].position = Vector2.one;
point_maker[1].scale = 1;
and so on...
Heads-up: that's probably not the way to do what you're looking for, as every pixel in your shader would need to loop over all your input points, while each of those pixels will be covered by at most one of them. It's a classic case of embracing the parallel nature of shaders. (The keyword for me here is 'random', as in 'random-looking'.)
There are two distinct problems here: generating circles, and masking them.
I would start by generating a grid out of your input space (most likely your UV coordinates, so I'll assume that from here) by taking the fractional part of the coords scaled by some value: UVs (usually) go from 0 to 1, so if you want 100 circles you'd multiply the coords by 10. You now have a grid of 100 UV tiles, in each of which you can do something similar to what you already have to generate the circle (tip: the dot product of a vector with itself gives the squared distance, which is much cheaper to compute).
You want some randomness, so you need to add an offset to the center of each circle. You need some sort of random number that is unique per cell of the grid (there might be a node for this in ASE, I can't remember, or you can make your own; there are plenty you can find online). To do this, you'd feed what's left over after the frac(), i.e. the integer part of the scaled coords that identifies the cell, into your hash/random method. You also need to limit that offset based on the radius of the circle so it doesn't touch the sides of the cell. You can overlay more than one layer of circles if you want more coverage as well.
The second step is to figure out whether to display each circle at all. For this you could make the drawing conditional on the distance from the center of the circle to an input coordinate you provide to the shader, compared against some threshold (it doesn't have to be an 'if' condition per se; it could be clamping the value to the background color or something).
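For what it's worth, here is a NumPy sketch of that grid-plus-hash idea, evaluated the way a fragment shader would, one UV per pixel (the hash constants are arbitrary, and the fixed radius stands in for whatever growth control you'd drive from C#):
import numpy as np

def hash2(cell):
    # Cheap per-cell pseudo-random 2D offset in [0, 1); a stand-in for an ASE/HLSL hash node.
    s = np.sin(cell @ np.array([[127.1, 269.5], [311.7, 183.3]])) * 43758.5453
    return s - np.floor(s)

def circle_grid(uv, cells=10, radius=0.3):
    # Returns 1 inside a randomly offset circle per grid cell, 0 outside.
    scaled = uv * cells
    cell_id = np.floor(scaled)        # integer part: unique per cell, feeds the hash
    local = scaled - cell_id          # frac(): 0-1 coordinates inside the cell
    center = 0.5 + (hash2(cell_id) - 0.5) * (1.0 - 2.0 * radius)  # keep the circle inside its cell
    d = local - center
    sq_dist = np.sum(d * d, axis=-1)  # dot(d, d): squared distance, no sqrt needed
    return (sq_dist < radius * radius).astype(float)

# Evaluate over a 256x256 "screen", one uv per pixel.
u, v = np.meshgrid(np.linspace(0.0, 1.0, 256), np.linspace(0.0, 1.0, 256))
mask = circle_grid(np.stack([u, v], axis=-1))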
I'm making a lot of assumptions about what you want to do here, and if you have stronger conditions on the point distribution you might be better off rendering quads to a render texture, for example, but that's a whole other topic :)

Region of Interest in nighttime vehicle detection

I am developing a project for detecting vehicles' headlights in night scenes. I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to keep the computing requirements low. I have looked through many papers, and they just use a fixed ROI like this one: the upper part is ignored and the bottom part is analysed later.
However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible ROI, one that changes in each frame. My experiment images are shown here:
If anyone has any idea, please give me some suggestions.
I would turn the problem around and say that we are looking for headlights ABOVE a certain line, rather than saying that the headlights are below a certain line, i.e. the horizon.
Your images have a very strong reflection off the tarmac, and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and the headlights. We therefore look for the row with the most light and use that as our floor, then look for headlights above this floor.
The idea here is that we look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.
This will only work with dark images (i.e. at night) and where the reflection of the headlights onto the tarmac is large.
It will NOT work with images taken in daylight.
I have written this in Python and OpenCV but I'm sure you can translate it to a language of your choice.
import matplotlib.pylab as pl
import cv2
# Load the image
im = cv2.imread('headlights_at_night2.jpg')
# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
Smooth the image heavily to mask out any local peaks or valleys.
We are trying to smooth the headlights and the reflection so that there will be one nice peak; ideally, the headlights and the reflection would merge into one area:
grey_image = cv2.blur(grey_image, (15,15))
Sum the intensities row-by-row
intensity_profile = []
for r in range(0, grey_image.shape[0]):
    intensity_profile.append(pl.sum(grey_image[r, :]))
Smooth the profile and convert it to a numpy array for easy handling of the data
window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')
Find the maximum value of the profile. That represents the y coordinate of the headlights and the reflection area. The heat map on the left shows you the distribution; the graph on the right shows you the total intensity value per row.
We can clearly see that the sum of the intensities has a peak. The y coordinate is 371, indicated by a red dot in the heat map and a red dashed line in the graph.
max_value = profile.max()
max_value_location = pl.where(profile==max_value)[0]
horizon = max_value_location
The blue curve in the right-most figure represents the variable profile.
The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be sky and therefore dark.
I display the result below.
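For completeness, a minimal way to draw that floor row on the image with OpenCV (the colour and window name are arbitrary; horizon is the array returned by pl.where above):
annotated = im.copy()
y = int(horizon[0])
cv2.line(annotated, (0, y), (annotated.shape[1] - 1, y), (0, 0, 255), 2)
cv2.imshow('floor line', annotated)
cv2.waitKey(0)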
I know that the lines in both images are at almost the same coordinates, but I think that is just a coincidence.
You may try downsampling the image.
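For example (just a guess at the intent; the scale factor is arbitrary):
small = cv2.resize(im, None, fx=0.25, fy=0.25, interpolation=cv2.INTER_AREA)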

Algorithm for "filling in" texture in a 2D image

I recall seeing a paper a while back describing an algorithm that could automatically and seamlessly "graft" texture from one part of an image onto another part of the image.
The approach was something along the lines of the following:
You'd build up a database of small squares of pixels (perhaps 8x8) from the parts of the picture that are present.
You'd then pick an empty pixel (the "destination" for the texture graft) to fill in and look for the square in your database that most closely matches the surrounding pixels. You'd then color the empty pixel according to the color of the corresponding pixel in the square you found. Then you pick another empty pixel and repeat until there are no empty pixels remaining.
Of course, this is only a vague description because I can't find any references to this algorithm to refresh my memory of the details! Can anyone help?
Sounds a lot like Texture Synthesis by Non-parametric Sampling (Efros and Leung, 1999).
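For reference, here is a heavily simplified Python sketch of the scheme described in the question: brute-force matching of fully-known squares against the known surroundings of each empty pixel, filled greedily from the outside in. The real paper adds a randomized choice among near-best matches and a smarter fill order, and this version is far too slow for real images; the function and variable names are my own:
import numpy as np

def fill_holes(image, hole_mask, window=9):
    # image: 2D float array; hole_mask: True where pixels are missing.
    # Toy assumption: at least one fully-known window exists somewhere in the image.
    half = window // 2
    img = image.astype(float).copy()
    mask = hole_mask.copy()
    known = ~mask
    h, w = img.shape

    # "Database": centre coordinates of every fully-known square of pixels.
    db = [(cy, cx) for cy in range(half, h - half) for cx in range(half, w - half)
          if known[cy - half:cy + half + 1, cx - half:cx + half + 1].all()]

    while mask.any():
        progressed = False
        for y, x in zip(*np.where(mask)):
            if y < half or x < half or y >= h - half or x >= w - half:
                mask[y, x] = False           # toy version: leave border pixels alone
                progressed = True
                continue
            valid = known[y - half:y + half + 1, x - half:x + half + 1]
            if not valid.any():
                continue                     # no known context yet; revisit on a later pass
            win = img[y - half:y + half + 1, x - half:x + half + 1]
            # Pick the database square whose overlap with the known surroundings matches best.
            err, by, bx = min(
                ((((img[cy - half:cy + half + 1, cx - half:cx + half + 1] - win)[valid] ** 2).sum(), cy, cx)
                 for cy, cx in db))
            img[y, x] = img[by, bx]          # copy the corresponding (centre) pixel
            mask[y, x] = False
            known[y, x] = True
            progressed = True
        if not progressed:
            break                            # nothing left that has any known context
    return img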