Why is no feature being tracked using KLT tracking in MATLAB?

I am trying to track some features (extracted using a multiscale Harris detector) between two frames with the Kanade-Lucas-Tomasi (KLT) algorithm, using the functions you can find here (MathWorks documentation).
I cannot understand what is going wrong: none of the points is tracked. I tried increasing the number of iterations and changing the size of the window around the features, but the result is always the same; no feature is tracked.
Is it a problem with the data (the image resolution is low, 240x180 pixels)?
Is the problem in the selected features?
These are the two images I am using:
This is my code:
img = single(imread('img.png'));
end_img = single(imread('end_img.png'));
coord_first = [24,21;25,97;29,134;37,25;37,55;37,64;38,94;38,103;40,131;41,139;43,14;44,22;44,54;44,63;46,93;46,101;47,111;49,131;49,140;52,166;55,52;62,151;76,51;78,89;81,151;81,165;83,13;92,165;111,18;111,96;155,42;155,62;155,81;155,100;156,129;163,133;168,126;170,40;170,65;172,26;173,134;174,59;174,84;174,103;174,116;175,73;178,97;186,142;186,149;190,119;190,132;194,75;209,99;210,42;210,66;212,133;212,152;215,61;215,79;218,119];
% display of the target image and all the features I want to track
figure
imshow(img,[]),
colormap gray
hold on
plot(coord_first(:,1), coord_first(:,2), 'r*');
% point tracker creation
% the parameters reported here are the default ones
pointTracker = vision.PointTracker('MaxIterations', 30, 'BlockSize', [31,31]);
% point tracker initialization
initialize(pointTracker,coord_first,img);
% actual tracking
[coord_end, point_validity] = step(pointTracker, end_img);
% display of all the correctly tracked features
figure
imshow(end_img,[]),
colormap gray
hold on
plot(coord_end(point_validity,1), coord_end(point_validity,2), 'r*');

Actually, I have just solved the problem myself. As noted above, the symptom was that no point was tracked.
The cause is that the input images must have grayscale values in [0, 1], not in [0, 255] (which is what I was passing).
There is no particular need to tune any of the parameters once the data are passed in the right format (at least in my case, with these low-resolution grayscale images).
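For reference, a minimal sketch of the fix (assuming 8-bit source PNGs, so the raw range is [0, 255]):
% normalize the single-precision images to [0, 1] before tracking
% (assumes 8-bit source images, i.e. raw values in [0, 255])
img = single(imread('img.png')) / 255;
end_img = single(imread('end_img.png')) / 255;
pointTracker = vision.PointTracker('MaxIterations', 30, 'BlockSize', [31,31]);
initialize(pointTracker, coord_first, img);
[coord_end, point_validity] = step(pointTracker, end_img);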

Check the contents of point_validity. If all elements of point_validity are false, then you would not see any points. If that is the case, the next question is why the points were not tracked.
For an image of this size, try setting 'NumPyramidLevels' to 1.
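For example, a minimal sketch of such a tracker setup (the other parameter values are just the ones from the question):
pointTracker = vision.PointTracker('MaxIterations', 30, ...
    'BlockSize', [31,31], 'NumPyramidLevels', 1);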

Related

Plot true color Sentinel-2A imagery in Matlab

Through a combination of non-MATLAB/non-native tools (GDAL) as well as native tools (geoimread), I can ingest Sentinel-2A data either as individual bands or as an RGB image created with gdal_merge. I'm stuck at the point where using
imshow(I, [])
produces a black image with apparently no signal. The range of intensity values in the image is 271 - 4349. I know there is a good signal in the image because when I do:
bit_depth = 2^15;
I = swapbytes(I);
[I_indexed, color_map] = rgb2ind(I, bit_depth);
I_double = im2double(I_indexed, 'indexed');
ax1 = figure;
colormap(ax1, color_map);
image(I_double)
i.e. index the image, collect a colormap, set the colormap, and then call the image function, I get a likeness of the region I'm exploring (albeit very strangely colored).
I'm currently considering whether I should try:
Find a low-level description of Sentinel-2A data, implement the scaling/correction
Use a toolbox, possibly this one.
Possibly adjust output settings in one of the earlier steps involving GDAL
Comments or suggestions are greatly appreciated.
A basic scaling scheme is:
% convert image to double
I_double = im2double(I);
% scaling
max_intensity = max(I_double(:));
min_intensity = min(I_double(:));
range_intensity = max_intensity - min_intensity;
I_scaled = (2^16 - 1) .* ((I_double - min_intensity) ./ range_intensity);
% display
imshow(uint16(I_scaled))
noting the importance of casting to uint16 from double for imshow.
A couple points...
You mention that I is an RGB image (i.e. N-by-M-by-3 data). If this is the case, the [] argument to imshow will have no effect; that argument only applies automatic display scaling to grayscale images.
Given the range of intensity values you list (271 to 4349), I'm guessing you are dealing with a uint16 data type. Since this data type has a maximum value of 65535, your image data only covers about the lower 16th of this range. This is why your image looks practically black. It also explains why you can see the signal with your given code: you apply swapbytes to I before displaying it with image, which in this case will shift values into the higher intensity ranges (e.g. swapbytes(uint16(4349)) gives a value of 64784).
In order to better visualize your data, you'll need to scale it. As a simple test, you'll probably be able to see something appear by just scaling it by 8 (to cover a little more than half of your dynamic range):
imshow(8.*I);
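If a fixed factor feels too ad hoc, a contrast stretch computed from the data should also work (a sketch, assuming I is an M-by-N-by-3 uint16 array):
% stretch each channel to cover the full display range
% (assumes I is an M-by-N-by-3 uint16 array)
I_stretched = imadjust(I, stretchlim(I), []);
imshow(I_stretched)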

Automatic detection of B/W image against colored background. What should I do when local thresholding doesn't work?

I have an image which consists of a black and white document against a heterogeneous colored background. I need to automatically detect the document in my image.
I have tried Otsu's method and a local thresholding method, but neither was successful. Edge detectors like Canny and Sobel didn't work either.
Can anyone suggest some method to automatically detect the document?
Here's an example starting image:
After using various threshold methods, I was able to get the following output:
What follows is an automated global method for isolating an area of low color saturation (e.g. a b/w page) against a colored background. This may work well as an alternative approach when other approaches based on adaptive thresholding of grayscale-converted images fail.
First, we load an RGB image I, convert from RGB to HSV, and isolate the saturation channel:
I = imread('/path/to/image.jpg');
Ihsv = rgb2hsv(I); % convert image to HSV
Isat = Ihsv(:,:,2); % keep only saturation channel
In general, a good first step when deciding how to proceed with any object detection task is to examine the distribution of pixel values. In this case, these values represent the color saturation levels at each point in our image:
% Visualize saturation value distribution
imhist(Isat); box off
From this histogram, we can see that there appear to be at least 3 distinct peaks. Given that our target is a black and white sheet of paper, we’re looking to isolate saturation values at the lower end of the spectrum. This means we want to find a threshold that separates the lower 1-2 peaks from the higher values.
One way to do this in an automated way is through Gaussian Mixture Modeling (GMM). GMM can be slow, but since you’re processing images offline I assume this is not an issue. We’ll use Matlab’s fitgmdist function here and attempt to fit 3 Gaussians to the saturation image:
% Find threshold for calling ROI using GMM
n_gauss = 3; % number of Gaussians to fit
gmm_opt = statset('MaxIter', 1e3); % max iterations to converge
gmmf = fitgmdist(Isat(:), n_gauss, 'Options', gmm_opt);
Next, we use the GMM fit to classify each pixel and visualize the results of our GMM classification:
% Classify pixels using GMM
gmm_class = cluster(gmmf, Isat(:));
% Plot histogram, colored by class
hold on
bin_edges = linspace(0,1,256);
for j=1:n_gauss, histogram(Isat(gmm_class==j), bin_edges); end
In this example, we can see that the GMM ended up grouping the 2 far left peaks together (blue class) and split the higher values into two classes (yellow and red). Note: your colors might be different, since GMM is sensitive to random initial conditions. For our use here, this is probably fine, but we can check that the blue class does in fact capture the object we’d like to isolate by visualizing the image, with pixels colored by class:
% Visualize classes as image
im_class = reshape(gmm_class ,size(Isat));
imagesc(im_class); axis image off
So it seems like our GMM segmentation on saturation values gets us in the right ballpark - grouping the document pixels (blue) together. But notice that we still have two problems to fix. First, the big bar across the bottom is also included in the same class with the document. Second, the text printed on the page is not being included in the document class. But don't worry, we can fix these problems by applying some filters on the GMM-grouped image.
First, we’ll isolate the class we want, then do some morphological operations to low-pass filter and fill gaps in the objects.
Isat_bw = im_class == find(gmmf.mu == min(gmmf.mu)); %isolate desired class
opened = imopen(Isat_bw, strel('disk',3)); % morphological open
closed = imclose(opened, strel('disk',50)); % morphological close
imshow(closed)
Next, we’ll use a size filter to isolate the document ROI from the big object at the bottom. I’ll assume that your document will never fill the entire width of the image and that any solid objects bigger than the sheet of paper are not wanted. We can use the regionprops function to give us statistics about the objects we detect and, in this case, we’ll just return the objects’ major axis length and corresponding pixels:
% Size filtering
props = regionprops(closed,'PixelIdxList','MajorAxisLength');
[~,ridx] = min([props.MajorAxisLength]);
output_im = zeros(numel(closed),1);
output_im(props(ridx).PixelIdxList) = 1;
output_im = reshape(output_im, size(closed));
% Display final mask
imshow(output_im)
Finally, we are left with output_im - a binary mask for a single solid object corresponding to the document. If this particular size filtering rule doesn’t work well on your other images, it should be possible to find a set of values for other features reported by regionprops (e.g. total area, minor axis length, etc.) that give reliable results.
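For instance, here is a hedged sketch of an area-based rule (the 0.5 fraction of the image area is an arbitrary, tunable assumption; the result is stored in output_alt so it does not clash with output_im above):
% alternative size filter based on object area
% (the 0.5 fraction of the image area is an assumed, tunable cutoff)
props_a = regionprops(closed, 'PixelIdxList', 'Area');
keep = [props_a.Area] < 0.5*numel(closed);
output_alt = false(size(closed));
output_alt(vertcat(props_a(keep).PixelIdxList)) = true;
imshow(output_alt)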
A side-by-side comparison of the original and the final masked image shows that this approach produces pretty good results for your sample image, but some of the parameters (like the size exclusion rules) may need to be tuned if results for other images aren't quite as nice.
% Display final image
image([I I.*uint8(output_im)]); axis image; axis off
One final note: be aware that the GMM algorithm is sensitive to random initial conditions, and therefore might randomly fail, or produce undesirable results. Because of this, it's important to have some kind of quality control measures in place to ensure that these random failures are detected. One possibility is to use the posterior probabilities of the GMM model to form some kind of criteria for rejecting a certain fit, but that’s beyond the scope of this answer.
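As a rough illustration of that idea (a sketch only; the 0.8 cutoff is an arbitrary assumption), you could look at how confidently pixels are assigned to their winning class:
% sketch: flag ambiguous GMM fits via posterior probabilities
% (the 0.8 cutoff is an assumed value, not a recommendation)
post = posterior(gmmf, Isat(:));   % N-by-n_gauss posterior probabilities
max_post = max(post, [], 2);       % confidence of each pixel's winning class
if mean(max_post) < 0.8
    warning('GMM fit looks ambiguous; consider refitting with a different seed.');
end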

MATLAB axes colors change when writing a GIF

Relevant Information: MATLAB R2015b, Mac
I am currently trying to write a GIF from a series of datasets (txt files) I took. I have no trouble writing the GIF file, however, when played back (in PowerPoint) or previewed in the OS X Finder, the axes labels change in color. In addition to the color change, I receive this warning:
Warning: Image data contains values that are out of range. Out of range values will be given the nearest valid value.
Currently, I grab all the data files in the directory, plot them, grab each frame, and put them into a GIF. Here is my code:
%Create MATLAB movie from plots
clearvars
%addpath('C:\Users\ucmuser\Documents\MATLAB')
filename='cooldown_movie.gif';
ext_in='txt';
[~,listing]=f_search(ext_in);
[r,~]=size(listing);
listing=natsortfiles(listing);
F=figure;
%r=20 is used for quick debugging (original is 460 files).
r=20;
% y=linspace(1,2*pi,100);
% x=linspace(1,100,100);
%C(1,1,1,r)=0;
for i=1:r
A=dlmread(listing{i});
listing{i}=strrep(listing{i},'_','\_');
x=A(1,:); %X Array
y=A(2,:); %Y Array
plot(x./1E9,y.*1E3,'-','LineWidth',1.2,...
'Color',[0.8500 0.3250 0.0980]);
grid off
xlabel('Frequency (GHz)','FontSize',18,'FontWeight','bold')
ylabel('Voltage (mV)','FontSize',18,'FontWeight','bold')
title('Cooldown Movie','FontSize',24,'FontWeight','bold')
G(i)=getframe(gcf);
drawnow
frame = G(i);
% im = frame2im(frame);
[C(:,:,1,i),map] = rgb2ind(frame.cdata,256,'nodither');
% if r == 1;
% imwrite(C,map,filename,'gif','LoopCount',Inf,'DelayTime',0);
% else
% imwrite(C,map,filename,'gif','WriteMode','append','DelayTime',0);
% end
end
imwrite(C,map,filename,'gif','LoopCount',Inf,'DelayTime',0);
A sample image is shown below. The axes labels change in color. If I turn on the grid, the effect is even more apparent with the grid changing in grayscale intensity. I've tried setting limits and the effect is still present.
I've found a solution for the time being, although I'm not quite sure why it works. I started going through the figure properties and turned off the GraphicsSmoothing property (type F = figure; F.GraphicsSmoothing = 'off'; before the for loop). This seemed to fix the problem as well as clear the original out-of-range warning.
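In code, the change amounts to something like this minimal sketch (the plot data here is a placeholder, just to show where the property is set):
% sketch: create the figure with graphics smoothing disabled
% before the frame-grabbing loop
F = figure;
F.GraphicsSmoothing = 'off';
plot(rand(1,10));                          % placeholder plot
G = getframe(gcf);
[C, map] = rgb2ind(G.cdata, 256, 'nodither');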
With the GraphicsSmoothing property turned "on" and going through the first 20 data files, I have a maximum indexed color value of 22 via max(max(max(C))). Performing the same code with the GraphicsSmoothing turned "off" yields a maximum value of 4.
I am attaching two images displaying the extreme color differences with GraphicsSmoothing turned on and off.
GraphicsSmoothing turned On (default)
GraphicsSmoothing turned Off
If someone knows why this is the case, I would greatly appreciate an explanation.
UPDATE: I attempted to use this solution with the grid on, and the background border turned orange. I am confused about this whole issue.

Image pre-processing and Connected Components

I have a comics image, and I want to extract panels and text balloons from it.
I am using the connected-components algorithm "bwconncomp" for this purpose.
Since "bwconncomp" requires a binary image as an argument, I am using "im2bw" to binarize my image, followed by some morphological filtering.
Ibw = im2bw(I,graythresh(I)); % also tried the default threshold along with all values in the range [0 1]
Imr = bwmorph(Ibw,'skel'); % also tried 'close' and 'clean'
Icc = bwareaopen(Imr,100);
The problem is that I am getting a drastic change in the number of detected connected components as I change the binarization threshold, and further changes after the morphological operations. None of the combinations I have tried gave me all of the major objects in the image; there are always some missing.
Can anyone please guide me with that?
You can try detecting text, rather than simply binarizing the image. If you have a recent version of MATLAB with the Computer Vision System Toolbox, you can try this example of text detection.
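That example is based on MSER region detection; here is a minimal sketch of the general idea (the file name is a placeholder and the area range is an assumed starting point):
% sketch: detect candidate text regions with MSER
% (requires the Computer Vision System Toolbox; 'comic.png' is a placeholder)
I = imread('comic.png');
Igray = rgb2gray(I);
regions = detectMSERFeatures(Igray, 'RegionAreaRange', [30 8000]);
figure; imshow(Igray); hold on
plot(regions, 'showPixelList', true, 'showEllipses', false);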

Making a 2D Matlab figure that contains multiple images which start at a given point (x,y)

Problem
I am trying to make a 2D figure in MATLAB which consists of multiple images and a graph with plot data (which I could eventually convert into an image too). For these images and the graph, I need to be able to specify where they are located in my Cartesian coordinate system.
For my specific case, it is sufficient to be able to "tell" Matlab where the left-bottom corner of the image is.
So, for the example above, I would need some "trick" to let "bird1.jpg" start at position (a,b), "bird2.jpg" at position (c,d), and my plot at position (e,f) in one MATLAB figure.
Solution to problem
Thanks to chappjc I was able to find a solution to my problem. Here is the code I used, so that other people can use it in the future too.
figure_color = [.741 .717 .42];
axe_color = [1 1 1];
Screen.fig = figure('units','pixels',...
'name','Parallel projection',...
'menubar','none',...
'numbertitle','off',...
'position',[100 100 650 720],...
'color',figure_color,...
'busyaction','cancel',...
'renderer','opengl');
Screen.axes = axes('units','pix',...
'position',[420 460 200 200],... % (420,460) is the position of the first image
'ycolor',axe_color,...
'xcolor',axe_color,...
'color',axe_color,...
'xtick',[],'ytick',[],...
'xlim',[-.1 7.1],...
'ylim',[-.1 7.1],...
'visible','On');
Screen.img = imshow(phantom);
Screen.axes2 = axes('units','pix',...
'position',[0 0 200 200],... % (0,0) is the position of the second image
'ycolor',axe_color,...
'xcolor',axe_color,...
'color',axe_color,...
'xtick',[],'ytick',[],...
'xlim',[-.1 7.1],...
'ylim',[-.1 7.1],...
'visible','On');
Screen.img2 = imshow(phantom);
Basically, what I do is first create a (big) figure, then create a first axes at a certain position within this big figure and make it the default axes. In these axes I display my first image (made with the phantom function). After that, I make new axes at another position and make them the default. Then I place an image there too (the same picture, but you can use another one if you want). You can also use handles, which is the cleaner method, as chappjc describes.
Positioning axes in a figure
One approach would be to manipulate the Position property of multiple axes in a figure. To make multiple axes in a figure:
hf = figure;
ha0 = axes('parent',hf,'Position',[x0 y0 w0 h0]);
ha1 = axes('parent',hf,'Position',[x1 y1 w1 h1]);
Then display your images and plots into the axes by specifying the handle (i.e. ha0 or ha1). For example: image(img0,'Parent',ha0) or imshow(img1,'parent',ha1).
Single Large Image
Another approach is to make a single large image and simply display it with image/imshow/etc.
First, for the plots, you can use getframe followed by frame2im to get an image in matrix form.
Next, decide what goes into your combined image and compute the largest box required to circumscribe the images (using their origins and sizes, find the largest x and y coordinates), which presumably includes the origin. Use this info to make a blank image (e.g. img = zeros(h,w,3) for an RGB image).
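A rough sketch of that compositing step (the images and origins below are placeholder values; note that image row 1 is the top, so the y origin is flipped when pasting):
% sketch: paste two RGB images onto one blank combined canvas
% (img0, img1 and the lower-left origins are assumed example values)
img0 = im2uint8(repmat(phantom(128), 1, 1, 3));   % placeholder image
img1 = im2uint8(repmat(phantom(96),  1, 1, 3));   % placeholder image
x0 = 0;   y0 = 300;        % lower-left corner of img0, in pixels
x1 = 250; y1 = 0;          % lower-left corner of img1, in pixels
[h0, w0, ~] = size(img0);  [h1, w1, ~] = size(img1);
W = max(x0 + w0, x1 + w1);  H = max(y0 + h0, y1 + h1);
canvas = zeros(H, W, 3, 'like', img0);            % blank combined image
canvas(H-(y0+h0)+1 : H-y0, x0+1 : x0+w0, :) = img0;
canvas(H-(y1+h1)+1 : H-y1, x1+1 : x1+w1, :) = img1;
imshow(canvas)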