Python scipy.ndimage.morphology.dilation - png

I have a huge problem concerning PNG images.
My PNG is a black/white letter (the letter is white, the background is black), with no colors in between.
My problem is that I want/must use binary_dilation/erosion in some way.
But when I try to do this, I get an image that is white inside with a blue background??
from scipy.ndimage.morphology import binary_dilation
from scipy.misc import imread, imsave

# read the image and normalize it to [0, 1]
template = imread("temp.png") / 255.0
imsave("Result.png", binary_dilation(template))
I have absolutely no clue why...

Beware of the color channels: if "temp.png" has them, then template.shape == (nx, ny, 3), or with alpha, template.shape == (nx, ny, 4). Binary dilation treats the last dimension as a third spatial dimension rather than as a color channel, which is not what you usually want. You can do binary_dilation(template[:,:,0]) to enforce a 2-D image operation.
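For instance, a minimal sketch of that fix (same imports as the question; note that scipy.misc.imread/imsave were removed in later SciPy releases, with imageio as the usual replacement):

from scipy.ndimage.morphology import binary_dilation
from scipy.misc import imread, imsave

template = imread("temp.png") / 255.0
print(template.shape)                        # (nx, ny, 3) or (nx, ny, 4) means channels are present
result = binary_dilation(template[:, :, 0])  # dilate a single channel, so the operation stays 2-D
imsave("Result.png", result.astype(float))   # bool -> float so imsave rescales it to black/white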

Related

Subtract background in image with Matlab

I have a list of images that I expect to have uneven illumination across the x-axis only. The background estimation/model will be calculated in advance, but only its shape will be given (so I will not have an empty background image that I can just subtract); I will just have its form/shape (for example, linear).
Is there a way to subtract the background and get even illumination across the image knowing only its shape (and without having an actual background image)? I have attached an image from the Matlab library that was created using a linear background (background = -3*x + 0.5). Can someone show me how to go from this example back to the original image using just the background shape?
I am also including the original image.
If you have the shape, and the conditions you state are correct, then you have the full background information. It is, however, not possible to undo the clipping that happened (where the image has a value of 255, you don't know what the original value was).
This is how you would create a background image given the shape:
img = imread('https://i.stack.imgur.com/0j2Gi.jpg');
img = img(27:284,91:370); % OP posted a screen shot? What is this margin?
x = 0:size(img,2)-1;
background = 0.3*x+0.5;                        % the linear model along the x-axis
background = repmat(background,size(img,1),1); % expand the row to a full-size image
imshow(img-background)
The result doesn't look exactly like the original image, but most of that can be explained by the clipping.
I replaced your -3*x+0.5 with 0.3*x+0.5, because the -3 makes no sense, and 0.3 keeps the background values in a meaningful range.
If, on the other hand, you are asking about fitting the linear model to the image data to estimate the exact background to subtract, then the problem is a bit more difficult, but not impossible. You can, for example, assume that the intensity would have been uniform along the x-axis had the illumination been uniform across the image. You can then take the mean intensity of each column (the intensity profile along the x-axis) and fit your model to it:
meanintensity = mean(img,1)';  % mean of each column: intensity profile along x
plot(meanintensity)
X = [ones(length(meanintensity),1),(0:length(meanintensity)-1)']; % meanintensity = X*params
params = X\meanintensity;      % least-squares fit of the linear model
hold on, plot(X*params)        % overlay the fitted line
background = X*params;
background = repmat(background',size(img,1),1);
imshow(img-background)
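For reference, the same fit is easy to reproduce in Python/NumPy; the synthetic img below is just a stand-in for the real data, since only the MATLAB original is shown here:

import numpy as np

# synthetic stand-in: a flat scene plus a linear gradient along x, with noise
rng = np.random.default_rng(0)
img = np.clip(50 + 0.3 * np.arange(200) + rng.normal(0, 2, (100, 200)), 0, 255)

meanintensity = img.mean(axis=0)                      # assume uniformity along y
X = np.column_stack([np.ones(meanintensity.size),     # intercept column
                     np.arange(meanintensity.size)])  # slope column
params, *_ = np.linalg.lstsq(X, meanintensity, rcond=None)
background = X @ params                               # fitted linear background
corrected = img - background[np.newaxis, :]           # subtract it from every row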

Image segmentation algorithm in MATLAB

I need to implement an image segmentation function in MATLAB based on the principles of the connected components algorithm, but with a few modifications. This is intended for very simple, 2D images, with a background color and some objects in different colors.
The idea is that, taking the image as a matrix, I provide a tool to select the background color (it will vary for every image). Then, once the background color value is selected, I have to segment all the objects in the image; the result should be a labeled matrix, of the same size as the image, with 0's for the background and a different number for each object.
This is a graphic example of what I mean:
I understand the idea of how to do it, but I do not know how to implement it in MATLAB. For each pixel (matrix position) I should mark it as visited, and then, if its value corresponds to that of the background, assign 0; if not, assign another value. The objects can be formed by different colors, so in the end I need to segment groups of adjacent pixels, whatever their color is. I also have to use 8-connectivity, in order to count the green object of the example image as only one object and not 4 different ones. And the objects should be counted from top to bottom and from left to right.
Is there a simple way of doing this in MATLAB? I know the bwlabel function, but it works for binary images only, so I'd like to adapt it to my case.
Once you know the background color, you can easily convert your image into a binary mask of the same size:
bw = img ~= bg_color;
Once you have a binary mask, you can call bwlabel with the 8-connectivity argument, as you suggested yourself.
Note: you might want to convert your color image from RGB representation to an indexed image using rgb2ind before processing.
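For comparison, a rough Python sketch of the same approach (the small array is a made-up indexed image; scipy.ndimage.label takes a structuring element that selects 8-connectivity):

import numpy as np
from scipy import ndimage

# made-up indexed image: 0 is the (selected) background color
img = np.array([[0, 0, 2, 2],
                [3, 0, 0, 2],
                [3, 0, 5, 0],
                [0, 0, 0, 5]])
bg_color = 0

mask = img != bg_color               # binary mask, regardless of object color
eight = np.ones((3, 3), dtype=bool)  # 8-connected neighborhood
labels, n = ndimage.label(mask, structure=eight)
print(labels)                        # 0 for background, 1..n for the objects, in raster order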

Changes Brightness in color replacement

I am working on replacing a certain color in an image with the user's selected color. I am using OpenCV for the color replacement.
Here, in short, I have described where I took help from and what I got.
How to change a particular color in an image?
I have followed the steps / taken the basic idea from the answer at the above link. In the accepted answer there, the author says you only need to change the hue for color replacement.
After that I ran into an issue similar to
color replacement in image for iphone application (i.e., it's good code for color replacement for those who are complete beginners)
From that issue I got the idea that I also need to change the "saturation".
Now I am running into issues like:
"When my source image is too light (i.e., with high brightness) and I am replacing the color with some dark color, the colors look light in the replaced image instead of dark. Because of that, the replaced color does not seem to match the color we used for the replacement."
This happens because I am not considering the brightness in the replacement. Here I am stuck: what is the formula or idea for changing the brightness?
Suppose I replace the brightness of the image with the brightness of the destination color; then it would look like a flat replacement and the image would lose its actual shadows and edges.
Edit:
When I consider the brightness of the source (i.e., of the pixel being processed) in the replacement, I face another issue. Let me explain with a scenario from my application.
For example, I change the color of a car (like whiteAngl explained); after that I erase a few portions of the newly colored car. Then I recolor the erased portion, but now the color applied after erasing doesn't match the color applied before erasing, because each time the pixel being processed has changed, so I get a different lightness in the output. How can I overcome this issue?
Any help will be appreciated.
Without seeing the code you have tried, it's not easy to guess what you have done wrong. To show you with a concrete example how this is done, let's change the ugly blue color of this car:
This short Python script shows how we can change the color using the HSV color space:
import cv2

orig = cv2.imread("original.jpg")
hsv = cv2.cvtColor(orig, cv2.COLOR_BGR2HSV)
hsv[:,:,0] += 100  # shift the hue of every pixel (OpenCV stores 8-bit hue in the range 0-179)
bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
cv2.imwrite('changed.jpg', bgr)
and you get:
On Wikipedia you see that hue ranges from 0 to 360 degrees, but for the values OpenCV uses, see the documentation. You can see I added 100 to the hue of every pixel in the image. I guess you want to change the color of only a portion of your image, but you probably get the idea from the above script.
Here is how to get the requested dark red car. First we get the red one:
The dark red one, where I tried to keep the metallic feeling:
As I said, the equation you use to shift the lightness of the color depends on the material you want the object to have. Here I came up with a quick-and-dirty equation that keeps the metallic look of the car. This script produces the above dark red car image from the first light blue car image:
import cv2

orig = cv2.imread("original.jpg")
hls = cv2.cvtColor(orig, cv2.COLOR_BGR2HLS)
hls[:,:,0] += 80  # change color from blue to red (hue channel)
for i in range(1, 50):  # reduce lightness up to 50 times
    # Select pixels whose lightness is greater than 0 (black) and below a falling threshold.
    # The 220-i*2 term makes bright pixels get darkened fewer times (than 50): in the first
    # iteration pixels with lightness >= 218 are untouched, in the second those >= 216, and so on.
    ind = (hls[:,:,1] > 0) & (hls[:,:,1] < (220 - i*2))
    # Subtract 1 from the lightness of the selected pixels (using the trick True=1, False=0),
    # so the selected pixels get darker.
    hls[:,:,1] -= ind
bgr = cv2.cvtColor(hls, cv2.COLOR_HLS2BGR)
cv2.imwrite('changed.jpg', bgr)
You are right: changing only the hue will not change the brightness at all (or only very weakly, due to some perceptual effects), and what you want is to change the brightness as well. And as you mentioned, setting the brightness to the target brightness will flatten all the pixel values (you will only see changes in saturation). So what's next?
What you can do is change the pixels' hue and additionally match the average lightness. To do that, just compute the average brightness B of all the pixels to be processed, and then multiply all your brightness values by Bt/B, where Bt is the brightness of your target color.
Doing that will match both the hue (due to the first step) and the brightness (due to the second step), while preserving the edges (because you only modified the average brightness).
This is a special case of histogram matching where your target histogram has a single value (the target color), so only the mean can be matched in a reasonable way.
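In code, a minimal sketch of that recipe (the file name and target values below are made up; OpenCV's 8-bit HLS uses hue 0-179 and lightness 0-255):

import cv2
import numpy as np

img = cv2.imread("original.jpg")
h, l, s = cv2.split(cv2.cvtColor(img, cv2.COLOR_BGR2HLS))

target_hue, target_light = 10, 80    # hypothetical target color in HLS

h[:] = target_hue                    # step 1: replace the hue
l = l.astype(np.float32)
l *= target_light / l.mean()         # step 2: scale the lightness by Bt/B to match the mean
l = np.clip(l, 0, 255).astype(np.uint8)

out = cv2.cvtColor(cv2.merge([h, l, s]), cv2.COLOR_HLS2BGR)
cv2.imwrite("recolored.jpg", out)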
And if you're looking for a "credible source", as stated in your bounty request: I am a postdoc at Harvard and will be presenting a paper on color histogram matching at Siggraph this year. ;)

How to convert a black and white photo that was originally colored, back to its original color?

I've converted a colored photo to black and white and bolded the edges. Now I need to convert it back to its original color, with the bolded edges. Is there any function in MATLAB which allows me to do so?
Once you remove the colour from an image, there is no possible way to automatically put it back. You're basically reducing a set of 16,777,216 colours to a set of 256: on average, each shade of grey has 65,536 equivalent colours, and without the original image there's no way to guess which one it was.
Now, if you were to take the bolded lines from your black-and-white image and paint them on top of the original coloured image, that might end up producing what you're looking for.
If what you are trying to do is apply some filter to the B/W image and then combine that with the original color, I suggest you convert your image to a color space with a lightness channel that suits your needs (for example, L*a*b* if you need the lightness to be uniformly distributed with respect to human perception of differences) and apply your filter only to the lightness channel.
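For illustration, a Python/OpenCV sketch of that idea (the question is about MATLAB, but the steps are the same; the Gaussian blur and file names below are just stand-ins for the actual filter and data):

import cv2

img = cv2.imread("color.jpg")
lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
L, a, b = cv2.split(lab)
L = cv2.GaussianBlur(L, (5, 5), 0)  # placeholder for the edge-bolding filter
out = cv2.cvtColor(cv2.merge([L, a, b]), cv2.COLOR_LAB2BGR)
cv2.imwrite("filtered.jpg", out)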

Histogram of image

I have 2 images that look nearly identical. The histogram for one (256 bins) has intensities distributed pretty evenly throughout. The other has intensities only in the lowest and highest bins. Why would this be? Wouldn't the latter then appear binary (that's not the case)?
Think about it this way: imagine you are taking the histogram of two grayscale images, each pixel represented by a value from 0 to 255. One image contains pixels that all have a gray level of 128. The second image contains a "checkerboard" pattern (pixels alternating between 0 and 255). If you step back far enough that you no longer see individual pixels, the two will appear identical to the naked eye. Your brain "averages" the alternating black and white pixels into a field of gray.
This is what your images are doing. The first image has intensities distributed evenly throughout the range and the second has concentrations at specific values, but if you calculate an average intensity for each image (and also for sub-sections within it), you should see similar values for both.
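A small demonstration of this in Python: a flat mid-gray image and a 0/255 checkerboard have nearly the same mean, but their histograms occupy opposite ends of the range:

import numpy as np

flat = np.full((8, 8), 128, dtype=np.uint8)                            # every pixel mid-gray
checker = (np.indices((8, 8)).sum(axis=0) % 2 * 255).astype(np.uint8)  # 0/255 checkerboard

print(flat.mean(), checker.mean())                                  # 128.0 vs 127.5
print(np.flatnonzero(np.bincount(flat.ravel(), minlength=256)))     # occupied bins: [128]
print(np.flatnonzero(np.bincount(checker.ravel(), minlength=256)))  # occupied bins: [0 255]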
Never trust your eyes! They will always lie to you.
Consider this silly example, which can be illustrative here: an X-ray 'photo' is nothing more than black and white dots, but as they are small and mixed throughout the image, your eyes see different shades of gray.
The same can happen in a digital image where, although the pixels may have the same size, they can be black and white and 'distributed' in the image in such a way that you see it as having more gray levels. This is called halftone.
Without seeing the images it's hard to say, but it sounds like the second may be slightly clipped.
The difference could also just be a slight difference in contrast between the images that's not visible to the naked eye.