How can I traverse through pixels? - matlab

Suppose, I have the following image in my hand.
I have marked some pixels of the image as follows,
Now, I have obtained the pixel mask,
How can I traverse through only those pixels that are in that mask?

Given a binary mask, mask, in which you want to process all the true pixels, you have at least two options that are both better than the double for loop example.
1) Logical indexing.
I(mask) = 255;
2) Use find.
linearIdx = find(mask);
I(linearIdx) = 255;

The original question:
How can I save only those pixels which I am interested in?
...
Question: Now, in Step #2, I want to save those pixels in a data structure (or whatever) d so that I can apply another function f2(I, d, p, q, r) that does something to that image on the basis of those pixels d.
Create a binary mask
Try using a logical mask of the image to keep track of the pixels of interest.
I'll make up a random image for example here:
randImg = rand(64,64,3);
imgMask = false(size(randImg(:,:,1)));
imgMask(:,1:4:end) = true; % take every fourth column; this would be your d
% Show what we are talking about
maskImg = zeros(size(randImg));
imgMaskForRGB = repmat(imgMask,1,1,3);
maskImg(imgMaskForRGB) = randImg(imgMaskForRGB);
figure('name','Psychedelic');
subplot(2,1,1);
imagesc(randImg);
title('Random image');
subplot(2,1,2);
imagesc(maskImg);
title('Masked pixels of interest');
Here's what it looks like:
It will be up to you to determine how to store and use the image mask (d in your case), since I am not sure how your functions are written. Hopefully this example gives you an idea of how it can be done, though.
EDIT
You added a second question since I posted:
But, now the problem is, how am I going to traverse through those pixels in K?
Vectorization
To set all masked pixels to white (this example image is a double in [0,1], so white is 1; use 255 for a uint8 image):
randImg(imgMaskForRGB) = 1;
In my example, I accessed all of the pixels of interest at the same time with my mask in a vectorized fashion.
I translated my 2D mask into a 3D mask, in order to grab the RGB values of each pixel. That was this code:
maskImg = zeros(size(randImg));
imgMaskForRGB = repmat(imgMask,1,1,3);
Then to access all of these pixels in the image of interest, I used this call:
randImg(imgMaskForRGB)
These are your pixels of interest. If you want to halve these values, you could do something like this:
randImg(imgMaskForRGB) = randImg(imgMaskForRGB)/2;
Loops
If you really want to traverse the image one pixel at a time, you can always use a double for loop:
for r = 1:size(randImg,1)
    for c = 1:size(randImg,2)
        if(imgMask(r,c))                 % only process the flagged pixels
            curPixel = randImg(r,c,:);   % grab the RGB values of this pixel
        end
    end
end
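If you only want to visit the flagged pixels rather than scanning the whole image, here is a small sketch (reusing imgMask and randImg from above) that loops over the subscripts returned by find instead:
% Visit only the flagged pixels via their row/column subscripts
[rIdx, cIdx] = find(imgMask);                % coordinates of every true pixel
for k = 1:numel(rIdx)
    curPixel = randImg(rIdx(k), cIdx(k), :); % RGB triple of this masked pixel
    % ... do something with curPixel here ...
end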

Okay, I have solved this using the answer of @informaton:
I = imread('gray_bear.png');
J = rgb2gray(imread('marked_bear.png'));
mask = I - J;
for r = 1:size(I,1)
    for c = 1:size(I,2)
        if(mask(r,c))
            I(r,c) = 255;
        end
    end
end
imshow(I);
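For reference, the same result can be had without the loop at all, using logical indexing as in the accepted answer (a sketch with the same file names and mask as above):
I = imread('gray_bear.png');
J = rgb2gray(imread('marked_bear.png'));
mask = I - J;
I(mask ~= 0) = 255;   % set every masked pixel to white in one vectorized step
imshow(I);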

Related

How can I merge multiple images into one and save it on matlab?

I need to merge multiple bitmaps of the same size into one image. The image is basically rotated at different angles and needs to be merged into one whole image. I have tried multiple methods, but I run into many issues because I am not able to save the resulting image.
I have tried multiple pieces of code, but I cannot really make sense of them. What I want to achieve is a transparent overlay (not sure of the term) that superimposes two images so you can actually see both in one image:
figure1 = figure;
ax1 = axes('Parent',figure1);
ax2 = axes('Parent',figure1);
set(ax1,'Visible','off');
set(ax2,'Visible','off');
[a,map,alpha] = imread('E:\training data\0.bmp');
I = imshow(a,'Parent',ax2);
set(I,'AlphaData',alpha);
F = imshow('E:\training data\200.bmp','Parent',ax1);
I just want to superimpose multiple images.
This is my data set:
This is what I want to achieve: I want to add all of the rotated images into one.
This is what I get, sadly. I have tried everything.
The following does roughly what you want. First load the image, then divide it into 6 equal blocks, and add these. To add the pixel values, I first converted the image to doubles, since uint8 can only hold pixel values up to 255; otherwise you would just see a large bright spot in the image because of clipping.
Then add all the blocks. You will see in the output that the car is not always perfectly centered in its block, so depending on what you are trying to achieve you may want to align the blocks using something like xcorr2 (see the sketch after the code below).
% load image
A = imread('S82CW.jpg');
fig = figure(1); clf
image(A);
% convert A to double and divide in blocks.
A = double(A);
[img_h, img_w, ~] = size(A);
block_h = img_h/2;
block_w = img_w/3;
% split image in blocks
Asplit = mat2cell(A, repelem(block_h,2), repelem(block_w,3), 3);
% check if splitting makes sense
figure(2); clf
for k = 1:numel(Asplit)
    subplot(3,2,k)
    image(uint8(Asplit{k}))
end
% superimpose all blocks,
A_super = zeros(size(Asplit{1,1}),'like',Asplit{1,1} ); % init array, make sure same datatype
for k = 1:numel(Asplit)
    A_super = A_super + Asplit{k};
end
% divide by max value in A and multiply by 255 to make pixel
% values fit in uint8 (0-255)
A_super_uint8 = uint8(A_super/max(A_super,[],'all')*255);
figure(3); clf;
image(A_super_uint8)
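If the cars do need to be aligned first, here is a rough sketch of the xcorr2 idea mentioned above. It assumes the Signal Processing Toolbox is available, uses only the green channel of each block to estimate the shift, and applies the estimated offset with a crude circular shift:
% Estimate the offset of one block relative to a reference block
ref  = Asplit{1,1}(:,:,2);                              % green channel of the reference block
tst  = Asplit{1,2}(:,:,2);                              % green channel of the block to align
c    = xcorr2(ref - mean(ref(:)), tst - mean(tst(:)));  % cross-correlation of mean-removed blocks
[~, imax]      = max(c(:));
[ypeak, xpeak] = ind2sub(size(c), imax);
yshift = ypeak - size(tst,1);                           % row offset of tst relative to ref
xshift = xpeak - size(tst,2);                           % column offset of tst relative to ref
aligned = circshift(Asplit{1,2}, [yshift, xshift, 0]);  % roughly aligned block, ready to be summed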

Wrong output for 2d convolution function

As the title says, the output I’m getting out of this function is incorrect. By incorrect I mean that the data are overflowing. How do I normalise the matrix correctly? Currently almost all of the pictures I get are white.
I called the function from another MATLAB file like this:
mask = [3,10,3;0,0,0;-3,-10,-3];
A = imread('football.jpg');
B = ConvFun(A,mask);
function [ image ] = ConvFun( img,matrix )
[rows,cols] = size(img); %// Change
%// New - Create a padded matrix that is the same class as the input
new_img = zeros(rows+2,cols+2);
new_img = cast(new_img, class(img));
%// New - Place original image in padded result
new_img(2:end-1,2:end-1) = img;
%// Also create new output image the same size as the padded result
image = zeros(size(new_img));
image = cast(image, class(img));
for i=2:1:rows+1 %// Change
    for j=2:1:cols+1 %// Change
        value=0;
        for g=-1:1:1
            for l=-1:1:1
                value=value+new_img(i+g,j+l)*matrix(g+2,l+2); %// Change
            end
        end
        image(i,j)=value;
    end
end
%// Change
%// Crop the image and remove the extra border pixels
image = image(2:end-1,2:end-1);
imshow(image)
end
In a convolution, if you want the pixel values to stay in the same range, you need to make the mask add up to 1. Just divide the mask by sum(mask(:)) after defining it. That is, however, not the case you are dealing with.
Sometimes that is not what is needed. For example, if you are doing edge detection (like the kernel you show), you don't really care about maintaining the pixel values; in those cases the problem is more the plotting of unnormalized images. You can always tell imshow to auto-select the display range: imshow(image,[]).
Also, I hope this is homework, as this is absolutely the worst way to code a convolution. FFT-based convolution is generally about 100 times faster, and MATLAB has a built-in for it.
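For comparison, here is a minimal sketch of the built-in route (the answer does not name the function; conv2 or imfilter are the usual choices, and this assumes the image is first reduced to a single gray plane):
A = imread('football.jpg');
mask = [3,10,3;0,0,0;-3,-10,-3];
Agray = double(rgb2gray(A));        % single plane, double to avoid uint8 clipping
B = conv2(Agray, mask, 'same');     % built-in 2-D convolution, same output size as input
imshow(B, []);                      % auto-select the display range, as suggested above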

Negative values in Watershed algorithm leading to black image

I'm using the watershed algorithm to try and segment touching nuclei. A typical image may look like:
or this:
I'm trying to apply the watershed algorithm with this code:
show(RGB_img)
%Convert to grayscale image
I = rgb2gray(RGB_img);
%Take structuring element of a disk of size 10, for the morphological transformations
%Attempt to subtract the background from the image: top hat is the
%subtraction of the open image from the original
%Morphological transformation to subtract background noise from the image
%Tophat is the subtraction of an opened image from the original. Remove all
%images smaller than the structuring element of 10
I1 = imtophat(I, strel('disk', 10));
%Increases contrast
I2 = imadjust(I1);
%show(I2,'contrast')
%Assume we have background and foreground and assess thresh as such
level = graythresh(I2);
%Convert to binary image based on graythreshold
BW = im2bw(I2,level);
show(BW,'C');
BW = bwareaopen(BW,8);
show(BW,'C2');
BW = bwdist(BW) <= 1;
show(BW,'joined');
%Complement because we want image to be black and background white
C = ~BW;
%Use distance transform to find nearest nonzero values from every pixel
D = -bwdist(C);
%Assign Minus infinity values to the values of C inside of the D image
% Modify the image so that the background pixels and the extended maxima
% pixels are forced to be the only local minima in the image (So you could
% hypothetically fill the image with water)
D(C) = -Inf;
%Gets 0 for all watershed lines and integers for each object (basins)
L = watershed(D);
show(L,'L');
%Takes the labels and converts to an RGB (Using hot colormap)
fin = label2rgb(L,'hot','w');
% show(fin,'fin');
im = I;
%Superimpose ridgelines,L has all of them as 0 -> so mark these as 0(black)
im(L==0)=0;
clean_img = L;
show(clean_img)
After C = ~BW; the whole image goes dark. I believe this is because the image pixels are all -Inf or some other large negative number. Is there a way around this, and if so, what could I change in my code to get this algorithm working? I've experimented a ton and I don't really know what's happening. Any help would be great!
The problem is with your show command. As you said in the comments, this uses imshow under the hood. If you try imshow directly, you'll see that you also get a black image. However, if you call it with appropriate limits:
imshow(clean_img,[min(clean_img(:)), max(clean_img(:))])
you'll see everything you expect to see.
In general I usually prefer imagesc for that reason. imshow makes arbitrary judgements as to what range to represent, and I usually can't be bothered to keep up with it. I think in your case your end image is uint16, so imshow chooses to represent the range [0, 65535]. Since all your pixel values are below 400, they look black to the naked eye over that range.
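For example, a quick way to inspect the label matrix from the question with imagesc (a sketch using the clean_img variable defined above):
figure;
imagesc(clean_img);   % scales the color limits to the data range automatically
axis image;           % keep square pixels
colorbar;             % shows which color corresponds to which label value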

Split up a binary image using their white boundaries in MATLAB

I'm trying to read the values in this image into variables using OCR in MATLAB. I'm having trouble doing so, so I tried to split up this image into smaller parts using the white boundary lines and then read each part, but I don't know how to do this. Any help would be appreciated, thanks.
If the blocks are always delimited by a completely vertical line, you can find where they are by comparing the original image (here transformed from RGB to grayscale to be a single plane) to a matrix made of repeats of the first row of the original image only. Since the lines are vertical, the intensity of the pixels in the first row will match the rest of the column. This generates a binary mask that can be used, in conjunction with a quick thresholding, to reject those lines that are all black pixels in every row. Then invert this mask and use regionprops to locate the bounding box of each region. Then you can pull these out and do what you like.
If the lines dividing the blocks of text are not always vertical or of constant intensity throughout, then there's a bit more work that needs to be done to locate the dividing lines, but nothing that's impossible. Some example data would be good to have in that case, though.
img = imread('http://puu.sh/cU3Nj/b020b60f0b.png');
imshow(img);
imgGray = rgb2gray(img);
imgMatch = imgGray == repmat(imgGray(1,:), size(imgGray, 1), 1);
whiteLines = imgMatch & (imgGray > 0);
boxes = regionprops(~whiteLines, 'BoundingBox');
for k = 1:6
    subplot(3,2,k)
    boxHere = round(boxes(k).BoundingBox);
    imshow(img(boxHere(2):(boxHere(2)+boxHere(4)-1), boxHere(1):(boxHere(1)+boxHere(3)-1), :));
end
You can sum along the columns of a binary image corresponding to that input image and find peaks from the sum values. This is precisely achieved in the code here -
img = imread('http://puu.sh/cU3Nj/b020b60f0b.png');
BW = im2bw(img,0.1); %// convert to a binary image with a low threshold
peak_sum_max = 30; %// max of sum of cols to act as threshold to decide as peak
peaks_min_width = 10; %// min distance between peaks i.e. min width of each part
idx = find( sum(BW,1)>=peak_sum_max );
split_idx = [1 idx( [true diff(idx)>peaks_min_width ] )];
split_imgs = arrayfun(@(x) img(:,split_idx(x):split_idx(x+1)),...
    1:numel(split_idx)-1,'Uni',0);
%// Display split images
for iter = 1:numel(split_imgs)
    figure,imshow(split_imgs{iter})
end
Please note that the final output split_imgs is a cell array with each cell holding image data for each split image.
If you would like to have the split images directly without the need for messing with cell arrays, after you have split_idx, you can do this -
%// Get and display split images
for iter = 1:numel(split_idx)-1
    split_img = img(:,split_idx(iter):split_idx(iter+1));
    figure,imshow(split_img)
end
There is now a built-in ocr function in the Computer Vision System Toolbox.
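For example, a minimal sketch applying it to one of the split blocks from above (assuming the Computer Vision System Toolbox is installed):
results = ocr(split_imgs{1});    % recognize text in the first split block
recognizedText = results.Text;   % the recognized characters as a char array
disp(recognizedText);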

Matlab: Matrix manipulation: Change values of pixels around center pixel in binary matrix

I have another problem today:
I have a binary matrix t, in which 1 represents a river channel, 0 represents flood plane and surrounding mountains:
t = Alog>10;
figure
imshow(t)
axis xy
For further calculations, I would like to expand the area of the riverchannel a few pixels in each direction. Generally speaking, I want to have a wider channel displayed in the image, to include a larger region in a later hydraulic model.
Here is my attempt, which does work in certain regions, but in areas where the river runs diagonal to the x-y axes it does not widen the channel. There seems to be a flaw in my approach that I cannot quite grasp.
[q,o] = find(t == 1);
qq = zeros(length(q),11);
oo = zeros(length(o),11);
% add +-5 pixel to result
for z = 1:length(q)
    qq(z,:) = q(z)-5:1:q(z)+5;
    oo(z,:) = o(z)-5:1:o(z)+5;
end
% create column vectors
qq = qq(:);
oo = oo(:);
cords = [oo qq]; % [x y]
% remove duplicates
cords = unique(cords,'rows');
% get limits of image
[limy limx] = size(t);
% restrict to x-limits
cords = cords(cords(:,1)>=1,:);
cords = cords(cords(:,1)<=limx,:);
% restrict to y-limits
cords = cords(cords(:,2)>=1,:);
cords = cords(cords(:,2)<=limy,:);
% test image
l = zeros(size(t)); % same size as the original binary image
l(sub2ind(size(l), cords(:,2)',cords(:,1)')) = 1;
figure
imshow(l)
axis xy
This is the image I get:
It does widen the channel in some areas, but generally there seems to be a flaw with my approach. When I use the same approach on a diagonal line of pixels, it will not widen the line at all, because it will just create more pairs of [1 1; 2 2; 3 3; etc].
Is there a better approach to this or even something from the realm of image processing?
A blur filter with a set diameter should work somewhat similarly, but I could not find anything helpful...
PS: I wasn't allowed to add the images, although I already have 10 rep, so here are the direct links:
http://imageshack.us/a/img14/3122/channelthin.jpg
http://imageshack.us/a/img819/1787/channelthick.jpg
If you have the image processing toolbox, you should use the imdilate function. This performs the morphological dilation operation. Try the following code:
SE = strel('square',3);
channelThick = imdilate(channelThin,SE);
where SE is a 3x3 square structuring element used to dilate the image stored in channelThin. This will expand the regions in channelThin by one pixel in every direction. To expand more, use a larger structuring element, or multiple iterations.
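For example, a sketch of a wider expansion (the radius of 5 pixels here is arbitrary; adjust it to the channel width you need):
SE5 = strel('disk', 5);                      % larger structuring element
channelThick = imdilate(channelThin, SE5);   % widens the channel by roughly 5 pixels
% or apply a small dilation repeatedly
channelThick = channelThin;
for k = 1:5
    channelThick = imdilate(channelThick, strel('square',3));
end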
You may apply morphological operations from image processing. Morphological dilation can be used in your example.
From the Image Processing Toolbox, you can use the bwmorph command, BW2 = bwmorph(BW,'dilate'), or the imdilate command, IM2 = imdilate(IM,SE).
Where IM is your image and SE is the structuring element. You can set SE = ones(3); to dilate the binary image by "one pixel" - but it can be changed depending on your application. Or you can dilate the image several times with the same structuring element if needed.
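For example, a sketch using the variable names from the question (t is the binary channel image; the repeat count of 3 is arbitrary):
SE = ones(3);                   % 3x3 structuring element
t2 = imdilate(t, SE);           % expand the channel by one pixel in every direction
t3 = bwmorph(t, 'dilate', 3);   % or apply the dilation three times with bwmorph
figure, imshow(t3), axis xy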