I want to identify only this marked part in my image (marked in red).
It should be a scale and translation invariant matching algorithm. Which is the best method I can use?
Will the SIFT method be useful here? As I have observed, it outputs many points. I want only this predefined part to be identified every time, perhaps as a blob, or as the centroid of the part.
Edit: I am trying to use SIFT from VLFeat. This is the code I am using:
Ia = imread ('Img_1.bmp') ; % Img_1 is the entire wheel's image
Ib = imread ('Img_2.png') ; % Img_2 is a small image containing only the part I want to identify in all images.
Ia = im2single(rgb2gray(Ia)) ;
Ib = im2single(rgb2gray(Ib)) ;
[fa, da] = vl_sift(Ia) ;
[fb, db] = vl_sift(Ib) ;
[matches, scores] = vl_ubcmatch(da, db) ;
After this, how can I view the matched images, as shown on the VLFeat website?
Also, will this method serve my purpose of identifying only the small notch?
How should I proceed after this?
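For the visualization part, here is a minimal sketch along the lines of VLFeat's own matching demo (vl_demo_sift_match.m); it assumes Ia and Ib have the same height, so pad the shorter image first:
xa = fa(1, matches(1,:)) ;
ya = fa(2, matches(1,:)) ;
xb = fb(1, matches(2,:)) + size(Ia,2) ; % shift Ib's x-coordinates right
yb = fb(2, matches(2,:)) ;
figure ; clf ;
imagesc(cat(2, Ia, Ib)) ;               % show the two images side by side
colormap gray ; axis image off ; hold on ;
line([xa ; xb], [ya ; yb], 'Color', 'b') ;  % draw one line per match
vl_plotframe(fa(:, matches(1,:))) ;     % overlay the matched frames on Ia
fb(1,:) = fb(1,:) + size(Ia,2) ;
vl_plotframe(fb(:, matches(2,:))) ;     % ... and on the shifted Ib
Since matches(1,:) indexes the keypoints found in Ia, the mean of their coordinates, mean(fa(1:2, matches(1,:)), 2), is one way to reduce the matched part to a single centroid-like point.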
I have found a couple of posts about filling gaps in binary images in MATLAB, but I am still struggling. I have written the following code but I cannot get it to work. Here is my binary image:
[binary image]
However, what I'm trying to achieve is the following:
[desired result: the same image with the gaps filled]
Does anyone know how to do this? I have been trying to use imfill, but I think I also need to define boundaries with the bwlabel function, and I don't know how. Any help would be greatly appreciated.
%%Blade_Image_Processing
clc;
clear;
%%Video file information
obj = VideoReader('T9_720p;60p_60mm_f4.MOV');
% Sampling rate - Frames per second
fps = get(obj, 'FrameRate');
dt = 1/fps;
% ----- find image info -----
file_info = get(obj);
image_width = file_info.Width;
image_height = file_info.Height;
% Desired image size
x_range = 1:image_height;
y_range = 1:image_width;
szx = length(x_range);
szy = length(y_range);
%%Get grayscale image
grayscaleimg1 = rgb2gray(read(obj,36));
grayscaleimg = imadjust(grayscaleimg1);
diff_im = medfilt2(grayscaleimg, [3 3]);
% Threshold: keep pixels with intensity in (t1, t2]
t1 = 60;
t2 = 170;
diff_im = (diff_im > t1 & diff_im <= t2);  % logical binary image
% Remove all connected components smaller than 2000 pixels
diff_im = bwareaopen(diff_im, 2000);
%imshow(diff_im)
%imhist(grayscaleimg)
%Fill gaps in binary image
BW2 = imfill(diff_im,'holes');
There are two main problems: the desired object has no readily usable distinguishing features, and it touches another object. The second problem could perhaps be handled with morphological opening/closing (the touching object is thin, the desired object is not; is this always the case?), but the first problem remains. If your object touched the edge but the others didn't, or vice versa, you could do something with imfill and subtraction. As it is now, maybe something like this would work:
With opening/closing, remove the connection so that your object is disjoint.
With imfill, remove what is left of the thin horizontal part.
Then you can bwlabel and remove everything that touches the sides or bottom of the image; in the case shown, that would leave only your object (see the sketch below).
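A minimal sketch of these three steps (using bwconncomp in place of bwlabel), assuming BW2 is the binary image produced above; the structuring-element radius is a guess that will need tuning for your pictures:
% 1) opening breaks the thin connection so the object becomes disjoint
BW3 = imopen(BW2, strel('disk', 5));
% 2) fill holes to clean up what is left behind
BW3 = imfill(BW3, 'holes');
% 3) remove every component touching the left, right, or bottom edge
[h, w] = size(BW3);
cc = bwconncomp(BW3);
keep = true(1, cc.NumObjects);
for k = 1:cc.NumObjects
    [r, c] = ind2sub([h w], cc.PixelIdxList{k});
    if any(r == h) || any(c == 1) || any(c == w)
        keep(k) = false;
    end
end
result = false(h, w);
result(vertcat(cc.PixelIdxList{keep})) = true;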
The exact solution depends heavily on what additional constraints hold for your pictures. I believe this is not a one-shot task; rather, you have more of these pictures and want to correctly find the object in all of them? You have to check what holds for all the pictures, such as whether the object always touches only something thin, or always touches only the upper edge, etc.
I am looking for advice on how to automatically register image stacks acquired at different magnifications. Specifically, we need to align a small z-stack (~100 um) taken of several brain cells in the live brain to a large z-stack (~2 mm) taken of the fixed brain. We want to be able to find the previously imaged cells again and take high-resolution images of the staining to identify pre-synaptic inputs. Both the difference in magnification and the rotation need to be taken into account, as well as possible shrinkage or swelling. We would like advice on the best way to do this using ImageJ or MATLAB.
With ImageJ you might want to download the plugin StackReg:
http://bigwww.epfl.ch/thevenaz/stackreg/
or this nice macro code freely available:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0053942
With MATLAB you are looking for imregister, imregtform and/or imwarp. Also look at imregconfig, which returns the optimizer and metric used during registration, and at the transformation type you wish to apply, for example translation, rigid body, etc. The examples on the MATLAB website are quite helpful. Please ask for more details if it's not clear.
EDIT:
Here is simple code to register a sequence of images stored in a cell array with MATLAB and the functions I mentioned:
clear
clc
dialog_title = 'Select the directory containing the images to be processed'; % select images
folder_name = uigetdir('',dialog_title);
addpath(folder_name);
% select current folder
cd(folder_name);
ImagesToRead = dir('*.tif');
%preallocation
ImageCell = cell(1,length(ImagesToRead));
for i=1 : length(ImagesToRead)
ImageCell{i} = imread(ImagesToRead(i).name);
end
[optimizer, metric] = imregconfig('monomodal'); % for optical microscopy you need the 'monomodal' configuration
% here you can modify the default properties of the optimizer to suit your
% needs / adjust the registration, e.g. optimizer.MaximumIterations = 300;
RegisteredCell = cell(1,length(ImagesToRead));
RegisteredCell{1} = ImageCell{1}; % the first image is the reference
for p = 2:length(ImagesToRead)
moving = ImageCell{p}; % the image you want to register
fixed = ImageCell{p-1}; % the image you are registering with
movingONE = rgb2gray(moving); % imregtform needs grayscale images
fixedONE = rgb2gray(fixed);
tform = imregtform(movingONE,fixedONE,'translation',optimizer,metric,'DisplayOptimization',true,'PyramidLevels',5);
% imregtform returns a geometric transformation object that imwarp can use directly
RegisteredCell{p} = imwarp(moving,tform,'OutputView',imref2d(size(fixedONE)));
end
Now all your images are stored in the cell array RegisteredCell and you can access each of them individually, e.g. RegisteredCell{YourImage}.
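One quick way to eyeball the result, assuming the cell arrays built above (imshowpair overlays two images in false colour):
figure;
for p = 2:numel(RegisteredCell)
    imshowpair(rgb2gray(ImageCell{p-1}), rgb2gray(RegisteredCell{p}), 'falsecolor');
    title(sprintf('frame %d registered onto frame %d', p, p-1));
    pause(0.5); % brief pause so each pair is visible
end
Hope that helps!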
I am using the VL_SLIC function in MATLAB and I am following the tutorial for the function here: http://www.vlfeat.org/overview/slic.html
This is the code I have written so far:
im = imread('slic_image.jpg');
regionSize = 10 ;
regularizer = 10;
vl_setup;
segments = vl_slic(single(im), regionSize, regularizer);
imshow(segments);
I just get a black image and am not able to see the segmented image with the superpixels. Is there a way to view the result as shown on the webpage?
The reason is that segments is actually a map that tells you which regions of your image are superpixels. If a pixel in this map belongs to ID k, this means that this pixel belongs to superpixel k. Also, the map is of type uint32, so when you try doing imshow(segments); it doesn't show anything meaningful. For the image seen on the website, there are 1023 segments given your selected parameters, so the map spans from 0 to 1023. If you want to see what the segments look like, you could do imshow(segments,[]);. What this does is map the region with ID 1023 to white, while pixels that don't belong to any superpixel region (ID of 0) get mapped to black. You would actually get something like this:
Not very meaningful! Now, to get what you see on the webpage, you're going to have to do a bit more work. From what I know, VLFeat doesn't have built-in functionality that shows you the results like what is seen on their webpage. As such, you will have to write code to do it yourself. You can do this by following these steps:
Create a map of all true values that is the same size as the image.
For each superpixel region k:
Create another map that marks true for any pixel belonging to region k, and false otherwise.
Find the perimeter of this region.
Set these perimeter pixels to false in the map created in Step #1.
Repeat Step #2 until we have finished going through all of the regions.
Use this map to mask the original image to get what you see on the website.
Let's go through that code now. Below is the setup that you have established:
vl_setup;
im = imread('slic_image.jpg');
regionSize = 10 ;
regularizer = 10 ;
segments = vl_slic(single(im), regionSize, regularizer);
Now let's go through that algorithm that I just mentioned:
% start with an all-true mask and knock out the superpixel boundaries
perim = true(size(im,1), size(im,2));
for k = 1 : max(segments(:))
    regionK = segments == k;            % mask for superpixel k
    perimK = bwperim(regionK, 8);       % perimeter of that region
    perim(perimK) = false;              % mark the boundary pixels
end
perim = uint8(cat(3,perim,perim,perim)); % replicate across colour channels
finalImage = im .* perim;                % zero out the boundary pixels
imshow(finalImage);
We thus get:
Bear in mind that this is not exactly the same as what you get on the website. I simply went to the website, saved that image, then proceeded with the code I just showed you. This is probably because the slic_image.jpg image is not the exact original used in their example; there seem to be superpixels in areas with some bad quantization artifacts. Also, I'm using a relatively old version of VLFeat, version 0.9.16; there may have been improvements to the algorithm since then, so I may not be using the most up-to-date version. In any case, this is something you can start with.
Hope this helps!
I found these lines in vl_demo_slic.m, which may be useful:
segments = vl_slic(im, regionSize, regularizer, 'verbose') ;
% overlay segmentation
[sx,sy]=vl_grad(double(segments), 'type', 'forward') ;
s = find(sx | sy) ;
imp = im ;
imp([s s+numel(im(:,:,1)) s+2*numel(im(:,:,1))]) = 0 ;
It generates edges from the gradient of the superpixel map (segments).
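To display the overlay, assuming im was loaded as a single-precision image in [0,1] (e.g. via im2single, as in the demo), something like this should work:
imshow(imp) ; % the zeroed boundary pixels render as black edges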
While I want to take nothing away from rayryeng's beautiful answer, this could also help:
http://www.vlfeat.org/matlab/demo/vl_demo_slic.html
Available in: toolbox/demo
I have a binary image of several connected components, some large and some small (maybe only 1 pixel). With this I am seeking a way to turn each connected component into a checkerboard pattern, instead of the connected blobs, in an efficient way.
So far I have come up with two approaches, but they either produce errors or are quite inefficient:
I know the entire image and can make a checkers pattern mask to remove 50% of the pixels. This is very fast, but will on average remove 50% of the connected components which are only one pixel in area.
Use bwlabel() in MATLAB/Octave and loop through each connected component, applying the mask to a component only if it is larger than 1 pixel (while leaving the other components to be considered when the loop gets to them). This can be very inefficient.
Any smart/built-in solutions which could be used?
Code to generate figure
T = zeros(40,40);
T(10:30,10:30) = 1;
chessVec = repmat([1;0],20,1);
T_wanted = (repmat([chessVec circshift(chessVec,1)],1,20).*T);
figure();
subplot(1,2,1);imshow(T);title('Start shape')
subplot(1,2,2);imshow(T_wanted);title('Wanted shape');
Nothing beats blanket checkering for efficiency. All you then need to do is add back the small connected components.
% create a test image
img = rand(100)>0.8;
img = imclose(img,ones(5));
img = imerode(img,strel('disk',2));
% get connected components; use 4-connectivity to preserve
% the diagonal single-pixel lines later
cc = bwconncomp(img,4);
% create a checkerboard using one of Matlab's special matrix functions
chk = invhilb(100) < 0;
% checker the original image, then add back the small stuff
img(chk) = 0;
smallIdx = cellfun(@(x) numel(x) < 2, cc.PixelIdxList);
img([cc.PixelIdxList{smallIdx}]) = 1;
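As a side note on invhilb: the trick works because the entries of the inverse Hilbert matrix have sign (-1)^(i+j), so thresholding at zero yields a checkerboard. An equivalent, more transparent construction of the same 100x100 mask would be:
[r, c] = ndgrid(1:100, 1:100);
chk = mod(r + c, 2) == 1; % true where i+j is odd, same as invhilb(100) < 0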
I have an image containing several specific objects. I would like to detect the positions of those objects in this image. To do that I have some model images containing the objects I would like to detect. These images are well cropped around the object instance I want to detect.
Here is an example:
In this big image,
I would like to detect the object represented in this model image:
Since you originally posted this as a 'gimme-da-codez' question, showing absolutely no effort, I'm not going to give you the code. I will describe the approach in general terms, with hints along the way, and it's up to you to figure out the exact code to do it.
Firstly, if you have a template and a larger image, and you want to find instances of that template in the image, always think of cross-correlation. The theory is the same whether you're processing 1D signals (called a matched filter in signal processing) or 2D images.
Cross-correlating an image with a known template gives you a peak wherever the template is an exact match. Look up the function normxcorr2 and understand the example in the documentation.
Once you find the peak, you'll have to account for the offset from the actual location in the original image. The offset is related to the fact that cross-correlating an N point signal with an M point signal results in an N + M -1 point output. This should be clear once you read up on cross-correlation, but you should also look at the example in the doc I mentioned above to get an idea.
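If the output-size relation feels abstract, here is a quick 1D sanity check (illustrative only; cross-correlation implemented as convolution with the flipped template):
x = [1 2 3 4 5];        % N = 5 samples
t = [1 1];              % M = 2 samples
c = conv(x, fliplr(t)); % cross-correlation of x with t
numel(c)                % returns N + M - 1 = 6
The same relation holds in 2D: normxcorr2 returns an array of size size(image) + size(template) - 1.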
Once you do these two, then the rest is trivial and just involves cosmetic dressing up of your result. Here's my result after detecting the object following the above.
Here are a few code hints to get you going. Fill in the rest wherever I have ...
%#read & convert the image
imgCol = imread('http://i.stack.imgur.com/tbnV9.jpg');
imgGray = rgb2gray(img);
obj = rgb2gray(imread('http://i.stack.imgur.com/GkYii.jpg'));
%# cross-correlate and find the offset
corr = normxcorr2(...);
[~,indx] = max(abs(corr(:))); %# Modify for multiple instances (generalize)
[yPeak, xPeak] = ind2sub(...);
corrOffset = [yPeak - ..., xPeak - ...];
%# create a mask
mask = zeros(size(...));
mask(...) = 1;
mask = imdilate(mask,ones(size(...)));
%# plot the above result
h1 = imshow(imgGray);
set(h1,'AlphaData',0.4)
hold on
h2 = imshow(imgCol);
set(h2,'AlphaData',mask)
Here is the answer that I was about to post when the question was closed. I guess it's similar to yoda's answer.
You can try to use normalized cross-correlation:
im=rgb2gray(imread('di-5Y01.jpg'));
imObj=rgb2gray(imread('di-FNMJ.jpg'));
score = normxcorr2(imObj,im);
imagesc(score)
The result is: (As you can see, the whitest point corresponds to the position of your object.)
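If you also want the object's position in pixel coordinates, here is one possible continuation of the code above (the subtraction accounts for the padding discussed in the other answer):
[~, idx] = max(score(:));
[yPeak, xPeak] = ind2sub(size(score), idx);
% the normxcorr2 output is padded by the template size, so subtract it
% to recover the top-left corner of the matched region in im
yTopLeft = yPeak - size(imObj,1) + 1;
xTopLeft = xPeak - size(imObj,2) + 1;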
The MathWorks has a classic demo of image registration using the same technique as in @yoda's answer:
Registering an Image Using Normalized Cross-Correlation