Strange shading behaviour with normal maps in UE4

I've been having some very strange lighting behaviour in Unreal 4. In short, here's what I mean:
Fig 1: Without any normal mapping on the bricks.
Fig 2: With a normal map applied, generated from the same black-and-white brick texture.
Fig 3: The base pixel normals of the objects in question.
Fig 4: The generated normals which get applied.
Fig 5: The material node setup which produces the issue shown in Fig 2.
As you can see, the issue occurs when using the normals generated by the HeightToNormalSmooth node. As shown, this is not an issue with the object normals (see Fig 3) or with a badly exported normal map (there isn't one in the traditional sense), nor with the HeightToNormalSmooth node itself (Fig 4 shows that it generates the correct bump normals).
To be clear, the issue is that using a normal texture at all (it occurs across all my materials) causes the positive-Y-facing faces of an object to turn completely black (or, it seems, to become purely reflection-based, since increasing the material's roughness makes the black faces look less 'shiny').
This is really strange: I've tested with multiple skylight setups and sun directions, and yet this always happens (even when the faces are directly lit), but only on +Y-aligned faces.
If anyone can offer insight that would be greatly appreciated.

You're subtracting what looks like 1 from the input, which then goes into a multiply-by-1, if I'm reading the graph correctly. In most cases this will make any image return black. In UE4, as in many other programs, the colors in an image are expressed as decimal Red, Green and Blue values in the range 0 to 1. To make red, for example, I could use R = 1, G = 0, B = 0; if R = 0, G = 0, B = 0, the result is black. A multiply node takes each pixel of the image you feed it (if it was white, R = 1, G = 1, B = 1) and multiplies its R, G and B values by the given number. Since zero multiplied by anything is zero, every pixel the subtract has driven down to zero stays at R = 0, G = 0, B = 0: all zeros, so you get black.
I also noticed you then multiply by one, which in most cases won't do much, since multiplying an input by 1 leaves it unchanged: if the input is 0 (black), 0 * 1 is still 0.
To fix your issue, try changing the value you subtract from your input to something smaller than one, say 0.5 or 0.6.
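To illustrate with made-up numbers: a mid-grey pixel of 0.8 becomes 0.8 - 1 = -0.2, which clamps to 0 (black) in the final output, and multiplying by 1 leaves it there. With a subtract value of 0.5 the same pixel survives as 0.8 - 0.5 = 0.3, a visible dark grey.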

So I've discovered why this was happening. It turns out there's a little option in the material settings called 'Tangent Space Normal'. It is on by default ('for convenience'); disabling it appears to completely fix the issue with the normal maps generated by HeightToNormalSmooth.

Related

Sample gradient range in Unity Shader Graph

I'm trying to put together a simple shader with Unity's Shader Graph. The material should appear white, yellow and blue, and static. However, the Sample Gradient node only displays yellow. If I change the Time input to Sine, the gradient blends through the colours. What is going wrong?
Plug a "UV" node into the gradient sampler's "Time" input instead of a "Time" node.
First thing to know is that a Gradient goes from 0 to 1.
The 0 is on the left (in white) and the 1 is on the right (in yellow).
I guess that the blue is between approx. 0.25 and 0.3.
On the other hand, you use the Time node, whose constantly growing value is far above 1. The sample is therefore always at the maximum, meaning plain yellow.
If you use SinTime, the value oscillates sinusoidally between -1 and 1 (clamped to the 0 to 1 range when sampling), making the colour sweep from left to right and back again.
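To illustrate with made-up numbers: at t = 4.2 s the Time node outputs 4.2, which saturates to the right end of the gradient (yellow), while sin(4.2) ≈ -0.87 saturates to the left end (white); shortly after, as sin(t) rises through roughly 0.25 to 0.3, the blue band shows.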
Additional information: as of today (2019.1.12) there seems to be a bug in Shader Graph when you use a Gradient on a system where the decimal separator is ",": the generated shader code is written with "," instead of "." inside function calls, messing up the arguments.

Color nodes in Networkx Graph based on specific values

I have looked a bit into the node_color keyword parameter of the nx.draw() function. Here are two different graphs colored using node_colors.
node_colors = [.5,.5,0.,1.]. Colors appear as expected
node_colors = [.9,1.,1.,1.]. Colors do not appear as expected
In the second image, I would expect the color of node 1 to be almost as dark as the others. I assume what is happening is that the colormap is being scaled from the minimum value to the maximum value in the data. For the first example that's fine, but how can I set the colormap to be scaled as 0 = white, 1 = blue every time?
You are correct about the cause of the problem: by default the colormap is scaled between the minimum and maximum of the data. To fix it, you need to define vmin and vmax. I believe
nx.draw(G, node_color=[0.9, 1., 1., 1.], vmin=0, vmax=1)
will do what you're after (for white-to-blue specifically you would also pass a matching colormap, e.g. cmap=plt.cm.Blues, but I would need to know which colormap you're using to be sure).
For edges, there are similar parameters: edge_vmin and edge_vmax.

MATLAB colormapping issue with value 0

I already searched the web and Stack Overflow for answers to this (seemingly simple) question, but couldn't find one:
I am in the process of writing a cellular automaton in MATLAB. I am using an n*m matrix with values between 0 and 15, from which I make an image using a colormap with 5 grey values (between 0 and 1). See the following code snippet:
WIDTH = 100;
HEIGHT = 100;
fields = randi(16,HEIGHT,WIDTH)-1;
% here the grey values 0, 0.25, 0.5, 0.75 and 1 are mapped to the values 1 to 16
cmRow = [1;0.75;0.75;0.5;0.75;0.5;0.5;0.25;0.75;0.5;0.5;0.25;0.5;0.25;0.25;0];
specialGray = [cmRow, cmRow, cmRow];
colormap(specialGray);
image(fields)
My problem is that there is no 0th row in the colormap for MATLAB to use when the value 0 occurs. As a result, one color is always missing.
Just using values from 1 to 16 instead of 0 to 15 is unfortunately not an option, as I rely heavily on these values later in the script.
Is there something obvious I am missing? Do you have any ideas how to tackle this issue?
The image function knows two types of color mapping: 'direct' (the default) and 'scaled'. If you use 'scaled', you can set the scale for the color with the caxis function. Thus, the following code should do the trick (but you can of course also transform the values as suggested by Oleg):
image(fields,'CDataMapping','scaled');
colormap(specialGray);
caxis([0 15]);
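If you would rather transform the values, presumably that amounts to offsetting the data only at display time, so fields itself keeps its 0 to 15 values for the rest of the script (a sketch; this may or may not be exactly what Oleg had in mind):
image(fields + 1);        % shift the 0..15 data onto colormap rows 1..16, display only
colormap(specialGray);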

get(0,'screensize') giving result [0 0 1 1] instead of actual pixels

During MATLAB sessions, get(0,'screensize') initially gives the correct resolution. Later on, though, the answer becomes [0 0 1 1]. This behaviour only stops when I restart MATLAB; it then returns the correct value again.
This error always happens when I run a specific part of our programme. It appears to happen after this specific line of code:
set(0,'PointerLocation',[.4*GUI.scrsz(3),.5*GUI.scrsz(4)],'units','normalized');
Even though I managed to isolate the error, I can't figure out the reason for this behaviour. I am using MATLAB R2010b on Windows 7 64-bit.
Please note that I'm not an advanced user of MATLAB, so please forgive me if I overlooked something obvious. Thanks in advance for your help.
The reason is that you set 'units' to 'normalized'. In normalized units your screen naturally starts in a corner, hence [0 0 ...], and fills the whole screen, hence [... 1 1] (the first pair gives the position, the second pair the width and height).
So the values are correct, they just no longer show pixels.
Just set the units back with set(0,'units','pixels') after the task that needed normalized units has finished, or store your screen size in a variable at the beginning of your script and use that later on.
With get(0,...) you read the root's default properties, and with set(0,...) you change them. That's why everything is normal again after a restart: MATLAB resets all values to their defaults on every start, which in your case means 'units' is 'pixels'.
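Putting both suggestions together, a minimal sketch (variable names are illustrative):
set(0,'units','pixels');                       % make sure the root units are pixels
scrsz = get(0,'screensize');                   % e.g. [1 1 1920 1080], stored up front
% ... the call that needs normalized units ...
set(0,'PointerLocation',[.4*scrsz(3), .5*scrsz(4)],'units','normalized');
set(0,'units','pixels');                       % restore the default straight away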

artifacts in processed images

This question is related to my previous post, Image Processing Algorithm in Matlab, on Stack Overflow, for which I already got the results I wanted.
But now I am facing another problem: I am getting some artefacts in the processed images. In my original images (a stack of 600 images) I can't see any artefacts. Please see this original image of a fingernail:
But in my 10 processed results I can see these lines:
I really don't know where they come from. Also, if they belong to the camera's sensor, why can't I see them in my original images? Any ideas?
Edit:
I have added the following code, suggested by @Jonas. It reduces the artefacts but does not completely remove them.
% averaging of images (accumulate in double so the uint8 values don't saturate)
im = double(D{1});
for i = 2:100
    im = imadd(im, double(D{i}));
end
im = im/100;
imshow(im,[]);
% subtract the mean image from each frame
for i = 1:100
    SD{i} = imsubtract(double(D{i}), im);
end
@belisarius has asked for more images, so I am going to upload 4 images of my finger with the speckle pattern and 4 images of the black background (size 1280x1024):
And here is the black background:
Your artifacts are in fact present in your original image, although not visible.
Code in Mathematica:
i = Import@"http://i.stack.imgur.com/5hM3u.png"
EntropyFilter[i, 1]
The lines are faint, but you can see them by binarization with a very low level threshold:
Binarize[i, .001]
As for what is causing them, I can only speculate. I would start tracing from the camera output itself. Also, you may post two or three images "as they come straight from the camera" to allow us some experimenting.
The camera you're using most likely has a CMOS chip. Since CMOS chips have independent column (and possibly row) amplifiers, which may have slightly different electronic properties, the signal from one column can be amplified more than from another.
Depending on the camera, this variability in column intensity can be stable. In that case you're in luck: take ~100 dark images (tape something over the lens), average them, and then subtract the average from each image before running the analysis. This should make the lines disappear. If they don't (or if there are additional lines), use the post-processing scheme proposed by Amro to remove the lines after binarization.
EDIT
Here's how you'd do the background subtraction, assuming that you have taken 100 dark images and stored them in a cell array D with 100 elements:
% take the mean; convert to double for safety reasons
meanImg = mean( double( cat(3,D{:}) ), 3);
% then you can subtract the mean from the original (non-dark-frame) image
correctedImage = rawImage - meanImg; %(maybe you need to re-cast the meanImg first)
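To run this over the whole measurement stack, a quick sketch (rawStack is an assumed cell array holding your raw frames):
correctedStack = cell(size(rawStack));
for i = 1:numel(rawStack)
    correctedStack{i} = double(rawStack{i}) - meanImg;   % dark-frame correction per frame
end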
Here is an answer that, in my opinion, will remove the lines more gently than the above-mentioned methods:
im = imread('image.png'); % Original image
imFiltered = im; % The filtered image will end up here
imChanged = false(size(im));% To document the filter performance
% 1)
% Compute the histograms for each column in the lower part of the image
% (where the columns are most clear) and compute the mean and std of each
% bin in the histogram.
histograms = hist(double(im(501:520,:)),0:255);
colMean = mean(histograms,2);
colStd = std(histograms,0,2);
% 2)
% Now loop through each gray level above zero and...
for grayLevel = 1:255
% Find the columns where the number of 'grayLevel' pixels is larger than
% colMean + 3*colStd, i.e. columns that contain statistically 'many'
% pixels with the current gray level.
lineColumns = find(histograms(grayLevel+1,:)>colMean(grayLevel+1)+3*colStd(grayLevel+1));
% Now remove all graylevel pixels in lineColumns in the original image
if(~isempty(lineColumns))
for col = lineColumns
imFiltered(:,col) = im(:,col).*uint8(~(im(:,col)==grayLevel));
imChanged(:,col) = im(:,col)==grayLevel;
end
end
end
imshow(imChanged)
figure,imshow(imFiltered)
Here is the image after filtering
And this shows the pixels affected by the filter
You could use some sort of morphological opening to remove the thin vertical lines:
img = imread('image.png');
SE = strel('line',2,0);
img2 = imdilate(imerode(img,SE),SE);
subplot(121), imshow(img)
subplot(122), imshow(img2)
The structuring element used was:
>> SE.getnhood
ans =
1 1 1
Without really digging into your image processing, I can think of two reasons for this to happen:
The processing introduced these artifacts. This is unlikely, but it's an option. Check your algorithm and your code.
This is a side-effect because your processing reduced the dynamic range of the picture, just like quantization. So in fact, these artifacts may have already been in the picture itself prior to the processing, but they couldn't be noticed because their level was very close to the background level.
As for the source of these artifacts, it might even be the camera itself.
This is a VERY interesting question. I used to deal with this type of problem with live IR imagers (video systems). We actually had algorithms built into the cameras to deal with it before the user ever saw or got their hands on the image. A couple of questions:
1) Are you dealing with RAW images, or with already pre-processed grayscale (or RGB) images?
2) What is your ultimate goal with these images? Is it simply to get rid of the lines regardless of the resulting quality of the rest of the image, or to preserve the absolute best image quality? Are you going to perform other processing afterwards?
I agree that those lines are most likely in ALL of your images. There are two reasons for lines like these to show up in an image. One is a bright scene in which the column op-amps get saturated, causing whole columns of your image to take the brightest value the camera can output. The other is bad op-amps or ADCs (analog-to-digital converters) themselves (most likely not an ADC, as there is normally essentially one ADC for the whole sensor, which would make the whole image bad, and that is not your case). The saturation case is actually much more difficult to deal with, and I don't think it is your problem. Note: too much saturation on a sensor can cause bad pixels and columns to arise in your sensor (which is why they say never to point your camera at the sun).
The bad-column problem can be dealt with. In another answer above, someone had you averaging images. While that may be good for finding out where the bad columns (or bad single pixels, or the noise matrix of your sensor) are (you would have to average while pointing the camera at black, white, and other essentially solid colours), it isn't the right way to get rid of them. By the way, what I am describing with the black and white frames, averaging, finding bad pixels and so on is called calibrating your sensor.
OK, so assuming you are able to get this calibration data, you WILL be able to find out which columns, and even which single pixels, are bad.
If you have this data, one way to erase the bad columns is:
for each bad column
    for each pixel (x, y) in the bad column
        pixel(x, y) = Average(pixel(x+1,y), pixel(x+1,y-1), pixel(x+1,y+1),
                              pixel(x-1,y), pixel(x-1,y-1), pixel(x-1,y+1))
What this essentially does is replace each bad pixel with the average of the six remaining good pixels around it. The above is an over-simplified version of the algorithm: there are certainly cases where a single bad pixel sits right next to the bad column and shouldn't be used for averaging, or where two or three bad columns sit right next to each other. One could imagine calculating the values for a bad column, then treating that column as good in order to move on to the next bad column, and so on.
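For what it's worth, here is a minimal MATLAB sketch of that neighbour-averaging idea (badCols is an assumed list of bad column indices from your calibration; it assumes the bad columns are isolated and not on the image border):
img = double(imread('image.png'));     % assumed input image
badCols = [17 203];                    % hypothetical result of the calibration step
for c = badCols
    for r = 2:size(img,1)-1            % skip the first and last row for simplicity
        img(r,c) = mean([img(r-1,c-1), img(r,c-1), img(r+1,c-1), ...
                         img(r-1,c+1), img(r,c+1), img(r+1,c+1)]);
    end
end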
Now, the reason I asked about RAW versus B/W or RGB: if you are processing RAW, then depending on the build of the sensor it could be that only one sub-pixel (if you will) of the Bayer-filtered image sensor has the bad op-amp. If you could detect this, you wouldn't necessarily have to throw out the data of the other, good sub-pixels. Secondly, if you are using an RGB sensor to take a grayscale photo and you shot it in RAW, you may be able to calculate your own grayscale pixels. Many sensors, when returning a grayscale image from an RGB sensor, simply pass back the green pixel as the overall pixel, because green effectively serves as the luminance of an image (this is why most image sensors implement two green sub-pixels for every red or blue sub-pixel). If that is what yours does (not ALL sensors do this), you may have better luck removing just the bad channel's column and performing your own grayscale conversion using:
gray = (0.299*r + 0.587*g + 0.114*b)
Apologies for the long-winded answer, but I hope this is still informative to someone :-)
Since you cannot see the lines in the original image, they are either present with an intensity difference that is low compared with the image's overall range, or they were added by your processing algorithm.
The shape of the disturbance hints at the first option (unless you have an algorithm that processes each row separately).
It seems your sensor's columns are not uniform. Try taking a picture without the finger (background only) using the same exposure (and other) settings, then subtracting it from the photo of the finger, prior to any other processing. (Make sure the background is uniform before taking both images.)
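A minimal sketch of that idea (file names are placeholders; both shots taken with identical settings):
finger = double(imread('finger.png'));         % scene with the finger
bg     = double(imread('background.png'));     % uniform background, same exposure
corrected = finger - bg;                       % removes the stable column pattern
imshow(corrected, []);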