I have to import a greyscale image into MATLAB, convert the pixels it is composed of from unsigned 8-bit ints into doubles, plot a histogram, and finally improve the image quality using the transformation:
v'(x,y) = a*v(x,y) + b
where v(x,y) is the value of the pixel at point x,y and v'(x,y) is the new value of the pixel.
My primary issue is the value of the two constants, a and b, that we have to choose for the transformation. My understanding is that an equalized histogram is desirable for a good image. Other code I've found on the Internet deals either with the built-in MATLAB histeq function or with calculating probability densities. I've found no reference anywhere to choosing the constants for the transformation given above.
I'm wondering if anyone has any tips or ideas on how to go about choosing these constants. I think the rest of my code does what it's supposed to:
image = imread('image.png');
image_of_doubles = double(image);
for i=1:1024
    for j=1:806
        pixel = image_of_doubles(i,j);
        pixel = 0.95*pixel + 5;
        image_of_doubles(i,j) = pixel;
    end
end
[n_elements,centers] = hist(image_of_doubles(:),20);
bar(centers,n_elements);
xlim([0 255]);
Edit: I also played a bit with different values of the constants. Changing the a constant seems to be what stretches the histogram; however, this only works for a values between 0.8 and 1.2 (and it doesn't stretch it enough - it equalizes the histogram only on the range 150 - 290). If you apply an a of, let's say, 0.5, the histogram is split into two blobs, with a lot of pixels at about 4 or 5 different intensities; again, not equalized in the least.
The operation that you are interested in is known as linear contrast stretching. Basically, you want to multiply each of the intensities by some gain and then shift the intensities by some number so you can manipulate the dynamic range of the histogram. This is not the same as histeq or using the probability density function of the image (a precursor to histogram equalization) to enhance the image. Histogram equalization seeks to flatten the image histogram so that the intensities are more or less equiprobable in being encountered in the image. If you'd like a more authoritative explanation on the topic, please see my answer on how histogram equalization works:
Explanation of the Histogram Equalization function in MATLAB
In any case, choosing the values of a and b is highly image dependent. However, one thing I can suggest if you want this to be adaptive is to use min-max normalization. Basically, you take your histogram and linearly map the intensities so that the lowest input intensity gets mapped to 0, and the highest input intensity gets mapped to the highest value of the associated data type for the image. For example, if your image was uint8, this means that the highest value gets mapped to 255.
Performing min/max normalization is very easy. It is simply the following transformation:
out(x,y) = max_type*(in(x,y) - min(in)) / (max(in) - min(in));
in and out are the input and output images. min(in) and max(in) are the overall minimum and maximum of the input image. max_type is the maximum value associated for the input image type.
For each location (x,y) of the input image, you substitute an input image intensity and run through the above equation to get your output. You can convince yourself that substituting min(in) and max(in) for in(x,y) will give you 0 and max_type respectively, and everything else is linearly scaled in between.
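For example (made-up numbers): if min(in) = 50, max(in) = 200 and the image is uint8 so that max_type = 255, then an input intensity of 125 maps to 255*(125 - 50)/(200 - 50) = 127.5, while 50 maps to 0 and 200 maps to 255.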
Now, with some algebraic manipulation, you can get this to be of the form out(x,y) = a*in(x,y) + b as mentioned in your problem statement. Concretely:
out(x,y) = max_type*(in(x,y) - min(in)) / (max(in) - min(in));
out(x,y) = max_type*in(x,y)/(max(in) - min(in)) - (max_type*min(in))/(max(in) - min(in)) // Distributing max_type over the subtraction
out(x,y) = (max_type/(max(in) - min(in)))*in(x,y) - (max_type*min(in))/(max(in) - min(in)) // Re-arranging first term
out(x,y) = a*in(x,y) + b
a is simply max_type/(max(in) - min(in)) and b is -(max_type*min(in))/(max(in) - min(in)).
You would make these your a and b and run them through your code. BTW, if I may suggest something, please consider vectorizing your code. You can very easily use the equation and operate on the entire image data at once, rather than looping over each value in the image.
Simply put, your code would now look like this:
image = imread('image.png');
image_of_doubles = 0.95*double(image) + 5; %// New
[n_elements,centers] = hist(image_of_doubles(:),20);
bar(centers,n_elements);
.... isn't it much simpler?
Now, modify your code so that the new constants a and b are calculated in the way we discussed:
image = imread('image.png');
%// New - get maximum value for image type
max_type = double(intmax(class(image)));
%// Calculate a and b
min_val = double(min(image(:)));
max_val = double(max(image(:)));
a = max_type / (max_val - min_val);
b = -(max_type*min_val) / (max_val - min_val);
%// Compute transformation
image_of_doubles = a*double(image) + b;
%// Plot the histogram - before and after
figure;
subplot(2,1,1);
[n_elements1,centers1] = hist(double(image(:)),20);
bar(centers1,n_elements1);
%// Change
xlim([0 max_type]);
subplot(2,1,2);
[n_elements2,centers2] = hist(image_of_doubles(:),20);
bar(centers2,n_elements2);
%// Change
xlim([0 max_type]);
%// New - show both images
figure; subplot(1,2,1);
imshow(image);
subplot(1,2,2);
imshow(image_of_doubles,[]);
It's very important that you cast the maximum integer to double, because intmax returns the value in the image's own integer class; without the cast, the subsequent arithmetic would be carried out in (and saturated to) that integer type. I've also taken the liberty of changing your code so that we can display the histogram before and after the transformation, as well as what the image looks like before and after you transform it.
As an example of seeing this work, let's use the pout.tif image that's part of the image processing toolbox. It looks like this:
You can definitely see that this requires some image contrast enhancement operation because the dynamic range of intensities is quite limited.
The image has a washed-out appearance.
Using this as the image, this is what the histogram looks like before and after.
We can certainly see the histogram being stretched. Now this is what the images look like:
You can definitely see more detail, even though it's darker. This tells you that a simple linear scaling isn't enough. You may want to actually use histogram equalization, or use a gamma (power-law) transformation.
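If it helps, here is a minimal sketch of those two alternatives (it assumes the uint8 image is still in the variable image and that the Image Processing Toolbox is available; the gamma value of 0.5 is just a starting point to play with):
im_eq    = histeq(image);                %// histogram equalization
im_gamma = imadjust(image, [], [], 0.5); %// power-law (gamma) mapping; gamma < 1 brightens
figure; imshowpair(im_eq, im_gamma, 'montage');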
Hopefully this will be enough to get you started though. Good luck!
I am looking for an application or a tool which is able, for example, to extract data from a 2D contour plot like the one below:
I have seen the https://dash-gallery.plotly.host/Portal/ tool, as well as https://plotly.com/dash/ and https://automeris.io/ , but I have tested them and it is difficult to extract the data (here the data are actually covariance matrices with ellipses, but I would like to extend this, if possible, to Markov chains).
If someone knows of more efficient tools, mostly for this kind of 2D plot, that would be helpful.
I am also open to commercial applications. I am on macOS 11.3.
If I am not on the right forum, please let me know.
UPDATE 1:
I tried to apply the method in MATLAB with the script below, from this previous post:
%// Import the data:
imdata = importdata('Omega_L_Omega_m.png');
Gray = rgb2gray(imdata.cdata);
colorLim = [-1 1]; %// this should be set manually
%// Get the area of the data:
f = figure('Position',get(0,'ScreenSize'));
imshow(imdata.cdata,'Parent',axes('Parent',f),'InitialMagnification','fit');
%// Get the area of the data:
title('Click with the cross on the most top left area of the *data*')
da_tp_lft = round(getPosition(impoint));
title('Click with the cross on the most bottom right area of the *data*')
da_btm_rgt = round(getPosition(impoint));
dat_area = double(Gray(da_tp_lft(2):da_btm_rgt(2),da_tp_lft(1):da_btm_rgt(1)));
%// Get the area of the colorbar:
title('Click with the cross within the upper most color of the *colorbar*')
ca_tp_lft = round(getPosition(impoint));
title('Click with the cross within the bottom most color of the *colorbar*')
ca_btm_rgt = round(getPosition(impoint));
cmap_area = double(Gray(ca_tp_lft(2):ca_btm_rgt(2),ca_tp_lft(1):ca_btm_rgt(1)));
close(f)
%// Convert the colormap to data:
data = dat_area./max(cmap_area(:)).*range(colorLim)-abs(min(colorLim));
It seems that I get values into the data array, but I don't know how to exploit them to reproduce the original figure from these data.
Could anyone see how to plot this kind of figure with MATLAB from the data I have extracted (I am not sure the MATLAB script has generated all the data for the green, orange and blue contours, with each confidence level, that is to say 68%, 95% and 99.7%)?
UPDATE 2: I have found the first elements of an answer at the following link:
partial answer but not fully completed
I quote elements of the approach:
clc
clear all;
imdata = imread('https://www.mathworks.com/matlabcentral/answers/uploaded_files/642495/image.png');
close all;
Gray = rgb2gray(imdata);
yax=sum(conv2(single(Gray),[-1 -1 -1;0 0 0; 1 1 1],'valid'),2);
xax=sum(conv2(single(Gray),[-1 -1 -1;0 0 0; 1 1 1]','valid'),1);
figure(1),subplot(211),plot(xax),subplot(212),plot(yax)
ROIy = find(abs(yax)>1e5);
ROIyinner = find(diff(ROIy)>5);
ROIybounds = ROIy([ROIyinner ROIyinner+1]);
ROIx = find(abs(xax)>1e5);
ROIxinner = find(diff(ROIx)>5);
ROIxbounds = ROIx([ROIxinner ROIxinner+1]);
PLTregion = Gray(ROIybounds(1):ROIybounds(2),ROIxbounds(1):ROIxbounds(2));
PLTregion(PLTregion==255)=nan;
figure(2),imagesc(PLTregion)
[N X]=hist(single(PLTregion(:)),0:255);
figure(3),plot(X,N),set(gca,'yscale','log')
PLTitems = find(N>2000) % limit "color" of interest to items with >2000 pixels
% PLTitems = 1×10
%     1    67    90   101   129   132   144   167   180   194
PLTvalues = X(PLTitems);
PLTvalues(1)=[]; %ignore black?
%test out region 1
for ind = 1:numel(PLTvalues)
temp = zeros(size(PLTregion));
temp(PLTregion==PLTvalues(ind) | (PLTregion<=50 & PLTregion>10))=255;
% figure(100), imagesc(temp)
temp = bwareaopen(temp,1000);
temp = imfill(temp,'holes');
figure(100), subplot(3,3,ind),imagesc(temp)
figure(101), subplot(3,3,ind),imagesc(single(PLTregion).*temp,[0 255])
end
If someone knows how to improve these first interesting results, it would be great if you could mention it.
Restating the problem - My understanding given the different comments and your updates is the following:
someone other than you is in possession of data, which as it happens is 2D data, i.e. an Nx2 matrix;
using the covariance matrix, they are effectively saying something about the joint distribution of these two dimensions, specifically about the variance;
if they assume a Gaussian distribution, as is implied by your comment regarding 68%, 95% and 99.7% for 1sigma, 2sigma and 3sigma, they can draw ellipses which represent the 2D-normal distribution: these are in fact some of the contour lines associated with the 3D "bell" surface;
you have obtained the contour lines in a graph and are trying to obtain the covariance matrix (not the original data...);
you are concerned about the complexity of having to extract the information from each ellipse.
Partial answer:
It is impossible to recover the original data. I hope you are already aware of that, but in case you are not, let's just note that the covariance matrix is a summary statistic of the data, much like the average, and although it says something about the data, many different datasets could happen to have the same summary statistic (the same way many different sets of numbers can give you an average of 10).
It is possible to somewhat recover the covariance matrix, i.e. the 3 numbers a, b and c in the matrix [a,b;b,c], though the error in doing so will likely be large because of how imprecise the pixel representation is. Essentially, you will be looking for the dimensions of the two axes, for the variances, as well as the angle of one of the axes, for the covariance.
Unless I am mistaken, under the Gaussian assumption above, you only need to measure this for one of the three ellipses, and then factor by whatever number of sigmas that contour represents. Here you might want to either use the best-defined ellipse, or attempt to use the largest one, which will provide the maximum precision for your measurements (cf. pixelization).
Also, the problem of finding the axes and angle for the ellipse need not be as complex as it seems from your first trials: instead of trying to find the contour of the ellipses, find the bounding rectangle.
In order to further simplify this process, if your images are color-coded the way you show, then a filter on blue pixels might be enough in terms of image processing. Then simply take the minimum and maximum (x,y) coordinates in order to obtain the bounding rectangle.
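A rough sketch of that filtering idea (the file name and the RGB thresholds are made up and would need tuning for your actual figure):
rgb  = imread('contour_plot.png');                              % hypothetical file name
blue = rgb(:,:,3) > 150 & rgb(:,:,1) < 100 & rgb(:,:,2) < 100;  % crude mask for blue-ish pixels
[r, c] = find(blue);                                            % pixel coordinates of the blue contour
bbox = [min(c), min(r), max(c) - min(c), max(r) - min(r)];      % bounding rectangle [x y width height]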
Once the bounding rectangle is obtained, find the equation to your ellipse (that's a question for a math group, but you could start here for example).
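And once you have the semi-axis lengths and the orientation of one ellipse (in data units, not pixels), rebuilding the covariance matrix is direct. A minimal sketch with made-up numbers, using the sigma-level convention from your question:
a_semi = 0.35;          % semi-major axis of the measured ellipse (made-up value)
b_semi = 0.15;          % semi-minor axis (made-up value)
theta  = 30*pi/180;     % angle of the major axis w.r.t. the x-axis (made-up value)
k      = 2;             % which contour was measured, e.g. 2 for the 95% / 2-sigma ellipse
R      = [cos(theta) -sin(theta); sin(theta) cos(theta)];  % eigenvectors (rotation)
D      = diag([(a_semi/k)^2, (b_semi/k)^2]);               % eigenvalues = variances along the axes
Sigma  = R*D*R.'                                           % covariance matrix [a,b; b,c]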
Happy filtering!
I'm trying to add poisson noise to a very simple image in MATLAB.
im = ones(256, 256);
noisy = imnoise(im, 'poisson');
After reading this answer, I tried this as well.
im = ones(256, 256);
noisy = imnoise(im2double(im), 'poisson');
to no avail.
I've also tried it with im = zeros(256, 256) but that did nothing as well.
From the documentation of imnoise:
If I is double precision, then input pixel values are interpreted as means of Poisson distributions scaled up by 1e12. For example, if an input pixel has the value 5.5e-12, then the corresponding output pixel will be generated from a Poisson distribution with mean of 5.5 and then scaled back down by 1e12.
This scaling does not happen when the input is uint8:
im = ones(256, 256, 'uint8');
noisy = imnoise(im, 'poisson');
In the case of double-precision, there are two issues:
The scaling of 1e12 seems excessive. This means that the output is taken from a Poisson distribution with a mean of 1e12, then divided by 1e12. The mean will be 1, and the standard deviation will be sqrt(1e-12)=1e-6. That is, the standard deviation will be tiny and the change in intensity will not be visible. If you use format long, MATLAB will show you these values:
>> format long
>> min(noisy(:))
ans =
0.999996115518000
>> max(noisy(:))
ans =
1
This last result (max is 1) indicates that MATLAB clips the results to the [0,1] range, because double-precision images are expected to be in that range. Thus, the distribution returned by your code is not Poisson, it is Poisson clipped at its mean.
So, for floating-point images, scale them appropriately first:
noisy = imnoise(im * 1e-12, 'poisson') * 1e12;
(or use a different factor if that suits you better).
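If you want to convince yourself that the rescaling worked, note that a Poisson distribution with mean 1 has standard deviation 1, so something like the following quick check (values will vary from run to run) should print numbers close to 1 and 1:
im    = ones(256, 256);
noisy = imnoise(im * 1e-12, 'poisson') * 1e12;
fprintf('mean = %.4f, std = %.4f\n', mean(noisy(:)), std(noisy(:)));
imshow(noisy, []);   % [] scales the display range so the noise is actually visible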
I tried it with an image I took off the internet from here.
I = imread('stick.jpg');
imshow(I)
J = imnoise(I,'poisson');
imshow(J)
If I take yours and change it to Gaussian, you will see a difference.
I = ones(256,256);
imshow(I)
J = imnoise(I,'gaussian');
imshow(J)
I haven't read enough on this, but my belief is that it's because the image has homogeneous intensity throughout.
I have 2 images, im1 and im2, shown below. The im2 picture is the same as im1, but the only difference between them is the colors. im1 has RGB ranges of (0-255, 0-255, 0-255) for each color channel, while im2 has RGB ranges of (201-255, 126-255, 140-255). My exercise is to reverse the added effects so I can restore im2 to im1 as closely as I can. I have 2 thoughts in mind. The first is to match their histograms so they both have the same colors. I tried it using histeq, but it restores only a portion of the image. Is there any way to change im2's histogram to be exactly the same as im1's? The second approach was just to copy each pixel value from im1 to im2, but this is wrong since it doesn't restore the original image state. Are there any suggestions to restore the image?
@sepdek below pretty much suggested the method that @NKN alluded to, but I will provide another approach. One more alternative I can suggest is to perform a colour correction based on a least mean squared solution. What this alludes to is that we can assume that transforming a pixel from im2 to im1 requires a linear combination of weights. In other words, given an RGB pixel where its red, green and blue components are shaped into a 3 x 1 vector from the corrupted image (im2), there exists some linear transformation to get its equivalent pixel in the clean image (im1). That is, we have this relationship:
[R_im1]         [R_im2]
[G_im1] =  A  * [G_im2]
[B_im1]         [B_im2]
Y = A * X
A in this case would be a 3 x 3 matrix. This is essentially performing a matrix multiplication to get your output corrected pixel. The input RGB pixel from im2 would be X and the output RGB pixel from im1 would be Y. We can extend this to as many pixels as we want, where pairs of pixels from im1 and im2 would establish columns along Y and X. In general, this would further extend X and Y to 3 x N matrices. To find the matrix A, you would find the least mean squared error solution. I won't get into the derivation, but finding the optimal A requires the pseudo-inverse. In our case, A works out to be:
A = Y*X.' * (X*X.')^(-1)
Once you find this matrix A, you would need to take each pixel in your image, shape it so that it becomes a 3 x 1 vector, then multiply A with this vector like the approach above. One thing you're probably asking yourself is what kinds of pixels do I need to grab from both images to make the above approach work? One guideline you must adhere to is that you need to make sure that you're sampling from the same spatial location between the two images. As such, if we were to grab a pixel at... say... row 4, column 9, you need to make sure that both pixels from im1 and im2 come from this same row and same column, and they are placed in the same corresponding columns in X and Y.
Another small caveat with this approach is that you need to be sure that you sample a lot of pixels in the image to get a good solution, and you also need to make sure the spread of your sampling is over the entire image. If we localize the sampling to be within a small area, then you're not getting a good enough distribution of the colours and so the output will not look very nice. It's up to you on how many pixels you choose for the problem, but from experience, you get to a point where the output starts to plateau and you don't see any difference. For demonstration purposes, I chose 2000 pixels in random positions throughout the image.
As such, this is what the code would look like. I use randperm to generate a random permutation from 1 to M where M is the total number of pixels in the image. These generate linear indices so that we can sample from the images and construct our matrices. We then apply the above equation to find A, then take each pixel and apply a matrix multiplication with A to get the output. Without further ado:
close all;
clear all;
im1 = imread('http://i.stack.imgur.com/GtgHU.jpg');
im2 = imread('http://i.stack.imgur.com/wHW50.jpg');
rng(123); %// Set seed for reproducibility
num_colours = 2000;
ind = randperm(numel(im1) / size(im1,3), num_colours);
%// Grab colours from original image
red_out = im1(:,:,1);
green_out = im1(:,:,2);
blue_out = im1(:,:,3);
%// Grab colours from corrupted image
red_in = im2(:,:,1);
green_in = im2(:,:,2);
blue_in = im2(:,:,3);
%// Create 3 x N matrices
X = double([red_in(ind); green_in(ind); blue_in(ind)]);
Y = double([red_out(ind); green_out(ind); blue_out(ind)]);
%// Find A
A = Y*(X.')/(X*X.');
%// Cast im2 to double for precision
im2_double = double(im2);
%// Apply matrix multiplication
out = cast(reshape((A*reshape(permute(im2_double, [3 1 2]), 3, [])).', ...
[size(im2_double,1) size(im2_double,2), 3]), class(im2));
Let's go through this code slowly. I am reading your images directly from StackOverflow. After that, I use rng to set the seed so that you can reproduce the same results on your end. Setting the seed is useful because it allows you to reproduce the random pixel selection that I did. We generate those linear indices, then create our 3 x N matrices for both im1 and im2. Finding A is exactly how I described, but you're probably not used to the rdivide (/) operator. rdivide finds the inverse of what is on the right side of the operator, then multiplies it with whatever is on the left side. This is a more efficient way of doing the calculation, rather than calculating the inverse of the right side separately and then multiplying with the left side when you're done. In fact, MATLAB will give you a warning stating to avoid calculating the inverse separately and that you should use the divide operators instead. Next, I cast im2 to double to ensure precision, as A will most likely be floating-point valued, then go through the multiplication of each pixel with A to compute the result. That last line of code looks pretty intimidating, but if you want to figure out how I derived this, I used the same trick to create vintage-style photos, which also require a matrix multiplication much like this approach; you can read up about it here: How do I create vintage images in MATLAB? The variable out stores our final image. After running this code and showing what out looks like, this is what we get:
Now, the output looks completely scrambled, but the colour distribution more or less mimics what the input original image looks like. I have a few explanations on why this is the case:
There is quantization noise. If you take a look at the final image, there are various white spots all over. This is probably due to the quantization error that is introduced when compressing your image. Pixels that should map to the same colours between the images will have slight variations due to quantization, which gives us that spotting.
There is more than one colour from im2 that maps to im1. If there is more than one colour from im2 that maps to im1, it is impossible for a linear multiplication with the matrix A to generate more than one kind of colour for im1 given a single pixel in im2. Instead, the least mean-squared solution will try to generate a colour that minimizes the error and gives you the best colour possible. This is probably why the face and other fine details of the image are obscured.
The image is noisy. Your im2 is not completely clean. I can also see various spots of salt and pepper noise across all of the channels. One bad thing about this method is that if your image is subject to noise, then this method will not faithfully reconstruct the original image properly. Your image can only be corrupted by a wrong mapping of colours. Should there be any other type of image noise introduced, then this method will definitely not work as you are trying to reconstruct the original image based on a noisy image. There are pixels in the noisy image that were never present in the original image, so you'll have no luck getting it back to the way it was before!
If you want to take a look at the histograms of each channel between the original image and the output image, this is what we get:
The code I used to generate the above figure was:
names = {'Red', 'Green', 'Blue'};
figure;
for idx = 1 : 3
subplot(3,2,2*idx - 1);
imhist(im1(:,:,idx));
title([names{idx} ': Image 1']);
end
for idx = 1 : 3
subplot(3,2,2*idx);
imhist(out(:,:,idx));
title([names{idx} ': Output']);
end
The left side shows the red, green and blue histograms for the original image while the right side shows the same histograms for the reconstructed image. You can see that the general shape more or less mimics the original image, but there are some spikes throughout - most likely attributed to quantization noise and the non-unique mapping between colours of both images.
All in all, this is the best that I could do, but I think that was the whole point of the exercise.... to show that it isn't possible.
For more information on how to perform colour correction, check out Richard Alan Peters II's Digital Image Processing slides on colour correction. This was what I started with, and the derivation of how to calculate A can be found in his slides. Perhaps you can use some of what he talks about in your future work.
Good luck!
It seems that you need a scaling function to map the values of im2 to the values of im1.
This is fairly simple and you could write a scaling function to have it available for any such case.
A basic scaling mapping would work as follows:
out_value = min_output + (in_value - min_input) * (outrange / inrange)
given that there is an input value in_value that is within a range of values inrange=max_input-min_input, and the mapping results in an output value out_value within a range outrange=max_output-min_output. We also need to take into account the minimum input and output range bounds (min_input and min_output) to have a correct mapping.
See for example the following code for a scaling function:
%
% scale the values of a matrix using a set of limits
% possible ways to use:
% y = scale( x, in_range, out_range) --> ex. y = scale( x, [8 230], [0 255])
% y = scale( x, out_range) --> ex. y = scale( x, [0 1])
%
function y = scale( x, varargin );
if nargin<2,
    error([upper(mfilename),':: Syntax: y=',mfilename,'(x[,in_range],out_range)']);
end;
if nargin==2,
    inrange=[min(x(:)) max(x(:))]; % compute the limits of the input variable
    outrange=varargin{1};          % get the output limits from the arguments
else
    inrange=varargin{1};           % get the input limits from the arguments
    outrange=varargin{2};          % get the output limits from the arguments
end;
if diff(inrange)==0, % degenerate input range (constant matrix or scalar)
    % just do a clipping...
    if x>=outrange(2),
        y=outrange(2);
    elseif x<=outrange(1),
        y=outrange(1);
    else
        y=x;
    end;
else
    % actually scale the data
    % using: out = min_output + (x-min_input) * (outrange / inrange)
    y = outrange(1) + (x-inrange(1))*abs(diff(outrange))/abs(diff(inrange));
end;
This function gets a matrix of values and scales them to a desired range.
In your case it could be used as follows (variable img is the scaled im2):
for i=1:size(im1,3), % for each of the input/output image channels
    output_range = [min(min(im1(:,:,i))) max(max(im1(:,:,i)))];
    img(:,:,i) = scale( im2(:,:,i), output_range);
end;
This way im2 is scaled to the range of values of im1 one channel at a time. Output variable img should be the desired one.
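As a quick sanity check (assuming img and im1 from the snippet above), the per-channel ranges should now agree:
for i=1:size(im1,3),
    fprintf('channel %d: im1 range [%g %g], img range [%g %g]\n', i, ...
        double(min(min(im1(:,:,i)))), double(max(max(im1(:,:,i)))), ...
        min(min(img(:,:,i))), max(max(img(:,:,i))));
end;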
I have computed PCA using the following:
function [signals,V] = pca2(data)
[M,N] = size(data);
data = reshape(data, M*N,1);
% subtract off the mean for each dimension
mn = mean(data,2);
data = bsxfun(@minus, data, mean(data,1));
% construct the matrix Y
Y = data'*data / (M*N-1);
[V D] = eigs(Y, 10); % reduce to 10 dimension
% project the original data
signals = data * V;
My question is:
Is "signals" is the projection of the training set into the eigenspace?
I saw in "Amir Hossein" code that "centered image vectors" that is "data" in the above code needs to be projected into the "facespace" by multiplying in the eigenspace basis's. I don't really understand why is the projection done using centered image vectors? Isn't "signals" enough for classification??
By signals, I assume you mean to ask why we are subtracting the mean from the raw vector form of the image.
If you think about PCA, it is trying to give you the best direction in which the data varies most. However, as your images probably contain only positive pixel values, the data will always lie in the positive quadrant, which will mislead your first and most important eigenvector in particular. You can read more about the second-moment matrix. But I will share a bad paint image that explains it. Sorry about my drawing.
Please ignore the size of the stars;
Stars: Your data
Red Line: Eigenvectors;
As you can easily see in 2D, centering the data can give a better direction for your principal component. If you skip this step, your first eigenvector will be biased towards the mean and give poorer results.
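If you want to see this effect with numbers rather than a drawing, here is a small synthetic sketch (made-up 2-D data, not your images): the leading eigenvector of the uncentered second-moment matrix points roughly towards the mean, while the centered one follows the true direction of variation.
rng(0);
X = [3 + 0.5*randn(200,1), 5 + 2*randn(200,1)];  % positive-valued 2-D data, most variance in dim 2
C_raw      = (X.'*X) / (size(X,1)-1);            % second-moment matrix, no mean subtraction
C_centered = cov(X);                             % covariance matrix, mean subtracted
[Vr, Dr] = eig(C_raw);      [~, ir] = max(diag(Dr));
[Vc, Dc] = eig(C_centered); [~, ic] = max(diag(Dc));
disp([Vr(:,ir) Vc(:,ic)])   % first column is pulled towards the mean, second follows the variation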
I have a vein image as follows. I use the watershed algorithm to extract the skeleton of the vein.
My code: (K is the original image).
level = graythresh(K);
BW = im2bw(K,level);
D = bwdist(~BW);
DL = watershed(D);
bgm = DL == 0;
imshow(bgm);
The result is:
As you can see, a lot of information is lost. Can anybody help me out? Thanks.
It looks like the lighting is somewhat uneven. This can be corrected using certain morphological operations. The basic idea is to compute an image that represents just the uneven lighting and subtract it, or to divide by it (which also enhances contrast). Because we want to find just the lighting, it is important to use a large enough structuring element, so that the operation examines more global properties rather than local ones.
%# Load image and convert to [0,1].
A = im2double(imread('http://i.stack.imgur.com/TQp1i.png'));
%# Any large (relative to objects) structuring element will do.
%# Try sizes up to about half of the image size.
se = strel('square',32);
%# Removes uneven lighting and enhances contrast.
B = imdivide(A,imclose(A,se));
%# Otsu's method works well now.
C = B > graythresh(B);
D = bwdist(~C);
DL = watershed(D);
imshow(DL==0);
Here are C (left), plus DL==0 (center) and its overlay on the original image:
You are losing information because when you apply im2bw, you are basically converting your uint8 image, where the pixel brightness takes a value from intmin('uint8')==0 to intmax('uint8')==255, into a binary image (where only logical values are used). This is what entails the loss of information that you observed.
If you display the image BW you will see that all the elements of K that had a value greater than the threshold level turn into ones, while those below the threshold turn into zeros.
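A tiny made-up example of that thresholding step:
K_small = uint8([10 120 200; 90 130 40])   % made-up 2x3 image
level   = graythresh(K_small)              % Otsu threshold, returned in [0,1]
BW      = im2bw(K_small, level)            % pixels brighter than level*255 become 1, the rest 0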
Yes, you'll likely need to lower your threshold (lower than what Otsu's method is giving you). And if the edge map is noisy when you lower the threshold, you should apply a 2-D Gaussian smoothing filter before you lower the threshold. This will move the edges slightly but will clean up noise too, so it's a tradeoff.
The 2-D Gaussian can be applied by doing something like
w = gausswin(N,Alpha);   % 1-D Gaussian window; you'll have to play with N and Alpha
w = w*w';                % outer product to build the 2-D kernel
w = w/sum(w(:));         % normalize so overall brightness is preserved
K = imfilter(K,w,'same','symmetric'); % something like these options
Before you apply the rest of your algorithm.