I have a binary image with several connected components, some large and some small (maybe only one pixel). I am looking for an efficient way to turn each connected component into a checkerboard pattern instead of a solid blob.
So far I have come up with two ways this could be done, but each either produces errors or is quite inefficient:
I know the size of the entire image and can make a checkerboard mask that removes 50% of the pixels. This is very fast, but on average it will also wipe out half of the connected components that are only one pixel in area.
Use bwlabel() in MATLAB/Octave and loop through each connected component, applying the mask to a component only if it is larger than one pixel (leaving the other components untouched until the loop reaches them). This can be very inefficient (sketched below).
Any smart/built-in solutions which could be used?
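For reference, here is a rough sketch of approach 2 (the per-component loop I would like to avoid), using the 40x40 image T that the figure code below creates:
chessMask = logical(repmat([1 0; 0 1], 20, 20));  % 40x40 checkerboard, true = keep
[L, num] = bwlabel(T, 4);
T_out = T;
for k = 1:num
    comp = (L == k);
    if nnz(comp) > 1                     % leave single-pixel components untouched
        T_out(comp & ~chessMask) = 0;    % checker only the larger components
    end
end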
Code to generate figure
T = zeros(40,40);
T(10:30,10:30) = 1;
chessVec = repmat([1;0],20,1);
T_wanted = (repmat([chessVec circshift(chessVec,1)],1,20).*T);
figure();
subplot(1,2,1);imshow(T);title('Start shape')
subplot(1,2,2);imshow(T_wanted);title('Wanted shape');
Nothing beats blanket checkering for efficiency. All you then need to do is add back the small connected components.
%# create a test image
img = rand(100)>0.8;
img = imclose(img,ones(5));
img = imerode(img,strel('disk',2));
%# get connected components
%# use 4-connect to preserve
%# the diagonal single-pixel lines later
cc = bwconncomp(img,4)
%# create checkerboard using one of Matlab's special matrix functions
chk = invhilb(100) < 0;
%# checker original image, add back small stuff
img(chk) = 0;
smallIdx = cellfun(@(x) numel(x) < 2, cc.PixelIdxList);
img([cc.PixelIdxList{smallIdx}]) = 1;
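Applied to the 40x40 example from the question, the same recipe would look like this (a sketch; the checkerboard phase of invhilb happens to line up with the one used for T_wanted):
T  = zeros(40,40);  T(10:30,10:30) = 1;
cc = bwconncomp(T, 4);                        %# label before checkering
T(invhilb(40) < 0) = 0;                       %# blanket checkerboard
smallIdx = cellfun(@(x) numel(x) < 2, cc.PixelIdxList);
T([cc.PixelIdxList{smallIdx}]) = 1;           %# restore the 1-pixel components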
I have found a couple of posts about filling gaps in binary images in MATLAB, but I am still struggling. I have written the following code but I cannot get it to work. Here is my binary image:
[image: the original binary image]
However, what I'm trying to achieve is the following:
[image: the desired result with the gaps filled]
Does anyone know how to do this? I have been trying imfill, and I think I also need to define boundaries with the bwlabel function, but I don't know how. Any help would be greatly appreciated.
%%Blade_Image_Processing
clc;
clear;
%%Video file information
obj = VideoReader('T9_720p;60p_60mm_f4.MOV');
% Sampling rate - Frames per second
fps = get(obj, 'FrameRate');
dt = 1/fps;
% ----- find image info -----
file_info = get(obj);
image_width = file_info.Width;
image_height = file_info.Height;
% Desired image size
x_range = 1:image_height;
y_range = 1:image_width;
szx = length(x_range);
szy = length(y_range);
%%Get grayscale image
grayscaleimg1 = rgb2gray(read(obj,36));
grayscaleimg = imadjust(grayscaleimg1);
diff_im = medfilt2(grayscaleimg, [3 3]);
t1=60;
t2=170;
range=(diff_im > t1 & diff_im <= t2);
diff_im (range)=255;
diff_im (~range)=0;
% Remove all objects smaller than 2000 pixels
diff_im = bwareaopen(diff_im,2000);
%imshow(diff_im)
%imhist(grayscaleimg)
%Fill gaps in binary image
BW2 = imfill(diff_im,'holes');
There are two main problems: the desired object has no readily usable distinguishing features, and it touches another object. The second problem could perhaps be handled with morphological opening/closing (the touching object is thin and the desired object is not, but is this always the case?); the first problem remains. If your object touched the edge and the others didn't, or vice versa, you could do something with imfill and subtraction. As it is now, maybe something like this would work:
With opening/closing, remove the connection so that your object is disjoint.
With imfill, remove what is left of the thin horizontal part.
Then you can use bwlabel and remove everything that touches the sides or bottom of the image; in the case shown, that would leave only your object.
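A rough sketch of these steps, assuming BW2 is the binary image produced by the code in the question; the structuring-element size and area threshold are guesses to tune, and bwareaopen stands in for the imfill step to drop the thin leftovers:
se  = strel('disk', 5);
BW3 = imopen(BW2, se);            % 1) break the thin connection
BW3 = bwareaopen(BW3, 500);       % 2) drop what is left of the thin part
% 3) remove components touching the sides or bottom, but keep those touching
%    the top: pad a background row on top so imclearborder never sees a
%    top-border contact, then crop the padding off again
BW4 = padarray(BW3, [1 0], 0, 'pre');
BW4 = imclearborder(BW4);
BW4 = BW4(2:end, :);
imshow(BW4)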
The exact solution depends heavily on what additional constraints hold for your pictures. I assume this is not a one-off: you have more of these pictures and want to find the object correctly in all of them? You have to check what holds for all pictures, such as whether the object always touches only something thin, or always touches only the upper edge, etc.
After reading two images (a, b), I want to find any object in image "b" that does not exist in the first image "a". The object could be of any shape; it does not matter. The two images were captured in the same place with the camera in the same state, but there could be some differences. I want to get the number of these different objects.
This is what I have tried so far:
i = imread('camera1.jpg');
j = imread('camera4.jpg');
a = im2double(i);
b = im2double(j);
f1 = ones(3,3)/9;        % 3x3 averaging filter
i1 = imfilter(a, f1);    % smooth both images before edge detection
j1 = imfilter(b, f1);
ed1 = edge(i1);
ed2 = edge(j1);
madBlock = mean2(abs(double(ed1) - double(ed2)))
I think the simplest way is to align the two images (e.g. in Hugin) and calculate the difference diff = |b - a|.
The next step is thresholding: set to zero all pixels in diff that are lower than a threshold. After that, do median filtering to remove salt-and-pepper noise and apply a connected-components search (marching squares method). This will give you the differences between the images.
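A minimal sketch of that pipeline, assuming the two JPEGs are RGB, already aligned, and with a hand-picked threshold:
a = im2double(rgb2gray(imread('camera1.jpg')));
b = im2double(rgb2gray(imread('camera4.jpg')));

d    = abs(b - a);               % per-pixel difference of the aligned images
thr  = 0.1;                      % threshold; tune for your data
mask = d > thr;                  % keep only significant differences
mask = medfilt2(mask, [3 3]);    % suppress salt-and-pepper noise
cc   = bwconncomp(mask);         % each component is one changed region
numberOfDifferentObjects = cc.NumObjects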
I want to identify only this marked part of my image (marked in red).
It should be a scale and translation invariant matching algorithm. Which is the best method I can use?
Will the SIFT method be useful here? As I have observed, it outputs many points. I want only this predefined part to be identified always. Maybe as a blob, or the centroid of this part.
Edit: I am trying to use SIFT from VLFeat. This is the code I am using:
Ia = imread ('Img_1.bmp') ; % Img_1 is the entire wheel's image
Ib = imread ('Img_2.png') ; % Img_2 is a small image containing only the part I want to identify in all images.
Ia = im2single(rgb2gray(Ia)) ;
Ib = im2single(rgb2gray(Ib)) ;
[fa, da] = vl_sift(Ia) ;
[fb, db] = vl_sift(Ib) ;
[matches, scores] = vl_ubcmatch(da, db) ;
After this, how can I view the matched images? As it is shown on the website?
Also, will this method serve my purpose of identifying only the small notch?
How should I proceed after this?
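For reference, the VLFeat demos draw matched keypoints on the two images placed side by side, roughly like this (a sketch, assuming Ia, Ib, fa, fb and matches from the code above):
rows = max(size(Ia,1), size(Ib,1));
IaP  = padarray(Ia, [rows-size(Ia,1) 0], 0, 'post');   % pad to a common height
IbP  = padarray(Ib, [rows-size(Ib,1) 0], 0, 'post');
figure; imagesc(cat(2, IaP, IbP)); colormap gray; axis image off; hold on;

xa = fa(1, matches(1,:));                 % matched keypoints in Ia
ya = fa(2, matches(1,:));
xb = fb(1, matches(2,:)) + size(Ia,2);    % shift Ib keypoints to the right half
yb = fb(2, matches(2,:));
line([xa; xb], [ya; yb], 'Color', 'y');   % one line per match

vl_plotframe(fa(:, matches(1,:)));
fbShift = fb;  fbShift(1,:) = fbShift(1,:) + size(Ia,2);
vl_plotframe(fbShift(:, matches(2,:)));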
I'm making a GUI in MATLAB that asks the user to upload a video file.
Next I want to play it in an axes with a fixed window size. However, if the uploaded file is large, MATLAB expands the axes so that it takes over most of my GUI. Is there a way to shrink the image to make it fit the axes?
Does anyone know how to solve this?
Usually MATLAB axes are not supposed to change their position if the image is too big.
I can think of two possible problems:
The axes were large from the beginning, but showed a small image with margins when the image was small enough.
The command you are using to show the image is custom, and it changes the axes size.
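If it is the latter, one workaround is to remember the axes position and restore it after drawing; a sketch, where handles.axes1 and frame are placeholder names for your axes handle and video frame:
axPos = get(handles.axes1, 'Position');   % layout position of the GUI axes
imshow(frame, 'Parent', handles.axes1);   % draw the (possibly large) frame
set(handles.axes1, 'Position', axPos);    % restore the position if it changed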
This question is old, but I stumbled across this (looking for something else) so perhaps it will help someone to see what I did.
I wanted to resize pretty large images (1024 x 100k-200k pixels) so that my GUI could quickly demonstrate various color operations on a view of these large data sets. I just manually sub-sampled my data as follows (functions below).
Note that this example is an image. To spatially sub-sample a video, I have looped through the video and done something similar in the past on each frame.
[plotWidthPixels, plotHeightPixels] = getPlotAreaPixels(handles.figure1, handles.axes1);
[nSamplesPerLine nLines] = size(iqData);
colInds = decimateToNumber(nLines,plotWidthPixels);
rowInds = decimateToNumber(nSamplesPerLine,plotHeightPixels);
iqDataToPlot = iqData(rowInds,colInds);
First, I got the axis size in pixels:
function [plotWidthPixels, plotHeightPixels] = getPlotAreaPixels(figHandle, axisHandle)
set(figHandle,'Units','pixels')
figSizePix = get(figHandle,'Position');
set(axisHandle,'Units','normalized')
axSizeNorm = get(axisHandle,'Position');
axisSizePix = figSizePix.*axSizeNorm;
plotWidthPixels = ceil(axisSizePix(3));   % Position is [left bottom width height]
plotHeightPixels = ceil(axisSizePix(4));
Then I used that to decimate the width and height of my image by getting sub-sets of indices that are (crudely approximately) evenly spaced:
function inds = decimateToNumber(lengthOfInitialVector, desiredVectorLength, initialIndex)
if nargin < 3
initialIndex = 1;
end
if (lengthOfInitialVector-initialIndex+1) > desiredVectorLength*2
inds = round(linspace(initialIndex,lengthOfInitialVector,desiredVectorLength));
else
inds = initialIndex:lengthOfInitialVector;
end
I have an image containing several specific objects. I would like to detect the positions of those objects in this image. To do that I have some model images containing the objects I would like to detect. These images are well cropped around the object instance I want to detect.
Here is an example:
In this big image,
I would like to detect the object represented in this model image:
Since you originally posted this as a 'gimme-da-codez' question, showing absolutely no effort, I'm not going to give you the code. I will describe the approach in general terms, with hints along the way, and it's up to you to figure out the exact code to do it.
Firstly, if you have a template and a larger image, and you want to find instances of that template in the image, always think of cross-correlation. The theory is the same whether you're processing 1D signals (called a matched filter in signal processing) or 2D images.
Cross-correlating an image with a known template gives you a peak wherever the template is an exact match. Look up the function normxcorr2 and understand the example in the documentation.
Once you find the peak, you'll have to account for the offset from the actual location in the original image. The offset is related to the fact that cross-correlating an N-point signal with an M-point signal results in an N + M - 1 point output. This should be clear once you read up on cross-correlation, but you should also look at the example in the doc I mentioned above to get an idea.
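As a quick sanity check of that size rule (a sketch; normxcorr2 requires a non-constant template, hence the random values):
c = normxcorr2(rand(3), rand(5));   % 3x3 template, 5x5 image
size(c)                             % [7 7], i.e. 5 + 3 - 1 in each dimension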
Once you do these two, then the rest is trivial and just involves cosmetic dressing up of your result. Here's my result after detecting the object following the above.
Here's a few code hints to get you going. Fill in the rest wherever I have ...
%#read & convert the image
imgCol = imread('http://i.stack.imgur.com/tbnV9.jpg');
imgGray = rgb2gray(imgCol);
obj = rgb2gray(imread('http://i.stack.imgur.com/GkYii.jpg'));
%# cross-correlate and find the offset
corr = normxcorr2(...);
[~,indx] = max(abs(corr(:))); %# Modify for multiple instances (generalize)
[yPeak, xPeak] = ind2sub(...);
corrOffset = [yPeak - ..., xPeak - ...];
%# create a mask
mask = zeros(size(...));
mask(...) = 1;
mask = imdilate(mask,ones(size(...)));
%# plot the above result
h1 = imshow(imgGray);
set(h1,'AlphaData',0.4)
hold on
h2 = imshow(imgCol);
set(h2,'AlphaData',mask)
Here is the answer that I was about to post when the question was closed. I guess it's similar to yoda's answer.
You can try using normalized cross-correlation:
im=rgb2gray(imread('di-5Y01.jpg'));
imObj=rgb2gray(imread('di-FNMJ.jpg'));
score = normxcorr2(imObj,im);
imagesc(score)
The result is: (As you can see, the whitest point corresponds to the position of your object.)
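To go from the score map to a location in the image, here is a sketch (assuming im and imObj from the code above):
[~, idx] = max(score(:));
[yPeak, xPeak] = ind2sub(size(score), idx);
% normxcorr2 pads the output, so subtract the template size to get the
% top-left corner of the match in the original image
rowTopLeft = yPeak - size(imObj,1) + 1;
colTopLeft = xPeak - size(imObj,2) + 1;
figure; imshow(im); hold on;
rectangle('Position', [colTopLeft, rowTopLeft, size(imObj,2), size(imObj,1)], ...
          'EdgeColor', 'r');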
MathWorks has a classic demo of image registration using the same technique as in @yoda's answer:
Registering an Image Using Normalized Cross-Correlation