Lock image size on Matlab GUI

I'm making a GUI in Matlab that asks the user to upload a video file. Next I want to play it in an axes with a fixed window size. However, if the uploaded file is large, Matlab expands the axes and it takes over most of my GUI. Is there a way to shrink the image so that it fits the axes?
Does anyone know how to solve this?

Matlab axes are not normally supposed to change their position just because the image is too big.
I can think of two possible problems:
The axes were large from the beginning, but showed a small image with margins whenever the image was small enough.
The command you use to display the image is a custom one, and it is what changes the axes size.
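In either case, one workaround is to fix the axes position yourself and shrink each frame to fit before displaying it. A minimal sketch, assuming a GUIDE-style handles structure and a frame already read into a variable called thisFrame (both names are assumptions, not from the question):
ax = handles.axes1;                                            % assumed axes handle
set(ax, 'Units', 'pixels');
axPos = get(ax, 'Position');                                   % [left bottom width height] in pixels
smallFrame = imresize(thisFrame, round([axPos(4) axPos(3)]));  % shrink the frame to the axes size
imshow(smallFrame, 'Parent', ax);                              % draw into the existing axes
set(ax, 'Position', axPos);                                    % restore the position in case imshow changed it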

This question is old, but I stumbled across it (looking for something else), so perhaps it will help someone to see what I did.
I wanted to resize pretty large images (1024 x 100k-200k pixels) so that my GUI could quickly demonstrate various color operations on a view of these large data sets. I just manually sub-sampled my data as follows (functions below).
Note that this example is for an image. To spatially sub-sample a video, I have in the past looped through the video and done something similar to each frame; a sketch of that loop appears after the functions below.
% Get the axes size in pixels, then pick row/column indices that decimate the data to roughly that many samples
[plotWidthPixels, plotHeightPixels] = getPlotAreaPixels(handles.figure1, handles.axes1);
[nSamplesPerLine, nLines] = size(iqData);
colInds = decimateToNumber(nLines, plotWidthPixels);
rowInds = decimateToNumber(nSamplesPerLine, plotHeightPixels);
iqDataToPlot = iqData(rowInds, colInds);
First, I got the axis size in pixels:
function [plotWidthPixels, plotHeightPixels] = getPlotAreaPixels(figHandle, axisHandle)
set(figHandle, 'Units', 'pixels')
figSizePix = get(figHandle, 'Position');    % [left bottom width height] in pixels
set(axisHandle, 'Units', 'normalized')
axSizeNorm = get(axisHandle, 'Position');   % [left bottom width height], normalized to the figure
axisSizePix = figSizePix .* axSizeNorm;
plotWidthPixels = ceil(axisSizePix(3));     % width is the 3rd element of the Position vector
plotHeightPixels = ceil(axisSizePix(4));    % height is the 4th element
Then I used that to decimate the width and height of my image by picking subsets of indices that are roughly evenly spaced:
function inds = decimateToNumber(lengthOfInitialVector, desiredVectorLength, initialIndex)
% Return roughly evenly spaced indices so that about desiredVectorLength of them are kept.
if nargin < 3
    initialIndex = 1;
end
if (lengthOfInitialVector - initialIndex + 1) > desiredVectorLength*2
    inds = round(linspace(initialIndex, lengthOfInitialVector, desiredVectorLength));
else
    inds = initialIndex:lengthOfInitialVector;   % already small enough; keep everything
end
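And here is a rough sketch of the per-frame loop mentioned above; the file name and axes handle are placeholders, and plotWidthPixels/plotHeightPixels come from getPlotAreaPixels as shown earlier.
obj = VideoReader('myVideo.avi');                         % placeholder file name
colInds = decimateToNumber(obj.Width,  plotWidthPixels);  % reuse the helper functions above
rowInds = decimateToNumber(obj.Height, plotHeightPixels);
while hasFrame(obj)
    frame = readFrame(obj);
    imshow(frame(rowInds, colInds, :), 'Parent', handles.axes1); % show the sub-sampled frame
    drawnow;
end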

Related

Filling the gaps in a binary image

I have found a couple of posts about filling gaps in binary images in Matlab, but I am still struggling. I have written the following code but I cannot get it to work. Here is my binary image:
[binary image]
However, what I'm trying to achieve is the following:
[desired result]
Does anyone know how to do this? I have been trying imfill, but I think I also need to define boundaries with the bwlabel function, and I don't know how. Any help would be greatly appreciated.
%%Blade_Image_Processing
clc;
clear;
%%Video file information
obj = VideoReader('T9_720p;60p_60mm_f4.MOV');
% Sampling rate - Frames per second
fps = get(obj, 'FrameRate');
dt = 1/fps;
% ----- find image info -----
file_info = get(obj);
image_width = file_info.Width;
image_height = file_info.Height;
% Desired image size
x_range = 1:image_height;
y_range = 1:image_width;
szx = length(x_range);
szy = length(y_range);
%%Get grayscale image
grayscaleimg1 = rgb2gray(read(obj,36));
grayscaleimg = imadjust(grayscaleimg1);
diff_im = medfilt2(grayscaleimg, [3 3]);
t1=60;
t2=170;
range=(diff_im > t1 & diff_im <= t2);
diff_im (range)=255;
diff_im (~range)=0;
% Remove all connected components smaller than 2000 pixels
diff_im = bwareaopen(diff_im,2000);
%imshow(diff_im)
%imhist(grayscaleimg)
%Fill gaps in binary image
BW2 = imfill(diff_im,'holes');
There are two main problems: the desired object has no readily usable distinguishing features, and it touches another object. The second problem could perhaps be handled with a morphological opening/closing (the touching object is thin and the desired object is not; is this always the case?), but the first problem remains. If your object touched the edge and the others didn't, or vice versa, you could do something with imfill and subtractions. As it is now, MAYBE something like this would work (a rough sketch follows the steps below):
With an opening/closing, remove the connection so that your object is disjoint.
With imfill, remove what is left of the thin horizontal feature.
Then bwlabel the result and remove everything that touches the sides or the bottom of the image; in the case shown, that would leave only your object.
The exact solution depends heavily on what additional constraints there are on your pictures. I assume this is not a one-shot task, but that you have more of these pictures and want to find the objects correctly in all of them? You have to check what holds for all of the pictures, for example whether the object always touches only something thin, or always touches only the upper edge, etc.
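A rough sketch of those steps, assuming the thresholded image from the question is in BW2; the structuring-element radius and area threshold are guesses, and bwareaopen is used here in place of imfill to drop the remnant of the thin feature:
bw = imopen(BW2, strel('disk', 5));      % break the thin connection to the other object
bw = imfill(bw, 'holes');                % fill internal holes
bw = bwareaopen(bw, 500);                % drop what is left of the thin feature
[L, n] = bwlabel(bw);
[h, w] = size(bw);
keep = false(n, 1);
for k = 1:n
    [r, c] = find(L == k);
    touchesSide   = any(c == 1) || any(c == w);
    touchesBottom = any(r == h);
    keep(k) = ~(touchesSide || touchesBottom);   % keep only objects away from the sides/bottom
end
bw = ismember(L, find(keep));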

How to correlate properly a moving sample in 2 images of different size?

I am currently recording, on a single camera, two images of the same sample from a microscope, one beside the other.
I have two issues with this, and I figured that I could address them in post-processing with Matlab.
First, the two images on the camera are supposed to have the same pixel size, but one is slightly bigger than the other, probably because of the optical pathways. What is the appropriate Matlab function or approach to correlate the two images so that they have exactly the same pixel size in X and Y?
[drawing: two images on the same camera, one slightly bigger than the other]
Secondly, my sample moves a little during the recording (while still staying in my field of view, of course). To make my analysis easier, I would like to correct the images so that the sample stays in the same place as in the first image, which would make calculations on it easier. What would be the appropriate Matlab function or approach to correct this movement in the images?
[drawing: the sample moving within the image on the camera]
Sorry for the poor quality of my drawings!
Thank you very much for your advice and help.
First, zero-pad the images so that they are both the size of the larger one:
size_padding = max(size(fig1),size(fig2));
fig1_pad = padarray(fig1,size_padding-size(fig1),'post');
fig2_pad = padarray(fig2,size_padding-size(fig2),'post');
Assuming the sample is the only feature present in the images, the best way to proceed would be to use the xcorr2() function and find the lag corresponding to the maximum correlation, which gives the spatial shift between the two images:
xc = xcorr2(fig1_pad,fig2_pad);
[max_cc, imax] = max(abs(xc(:)));
[ypeak, xpeak] = ind2sub(size(xc),imax(1));
corr_offset = [ (ypeak-size(fig2_pad,1)) (xpeak-size(fig2_pad,2)) ];
You then use circshift() to shift one of the images using the lag you obtained in the last step.
fig2_shift = circshift(fig2_pad,corr_offset);
You now have two images of the same size, where hopefully the sample is in the same position. If you want to remove the padding zeroes, crop the images to your liking with respect to the center using imcrop().
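If you do want to drop the padding afterwards, one simple option (a sketch only; it crops both images back to the size of fig1, which works here because the padding was added 'post', so the original content sits in the top-left corner) is:
rect = [1 1 size(fig1,2)-1 size(fig1,1)-1];   % imcrop rectangle is [xmin ymin width height]
fig1_crop = imcrop(fig1_pad,  rect);
fig2_crop = imcrop(fig2_shift, rect);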

Remove some rows and columns from the top, bottom, left and right borders of a JPG image using Matlab

I have RGB museum JPG images. Most of them have image footnotes on one or more sides, and I'd like to remove them. I have been doing that manually using paint software; now I have applied the following Matlab code to remove the image footnotes automatically. I get a good result for some images, but for others it does not remove any border. Please, can anyone help me update this code so that it works for all images?
rgbIm = im2double(imread('A3.JPG'));
hsv = rgb2hsv(rgbIm);
m = hsv(:,:,2);                            % saturation channel
foreground = m > 0.06;                     % value of background
foreground = bwareaopen(foreground, 1000); % or whatever.
labeledImage = bwlabel(foreground);
measurements = regionprops(labeledImage, 'BoundingBox');
ww = measurements.BoundingBox;
croppedImage = imcrop(rgbIm, ww);
In order to remove the boundaries you could use imclearborder, which looks for labelled components touching the image boundary and clears them. Caution: if the ROI itself touches the boundary, it may be removed as well. To reduce that risk you can apply imerode with a suitable strel (a line or a disk) before clearing the borders. The accuracy, and how well the method generalizes to all images, depends entirely on the threshold that separates the foreground from the background.
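A minimal sketch of that idea, reusing the threshold from the question's code; the structuring-element radius is a guess and will need tuning per image:
rgbIm = im2double(imread('A3.JPG'));
hsvIm = rgb2hsv(rgbIm);
foreground = hsvIm(:,:,2) > 0.06;                % same threshold as in the question
foreground = bwareaopen(foreground, 1000);
shrunk  = imerode(foreground, strel('disk', 3)); % pull the ROI away from the border first
cleaned = imclearborder(shrunk);                 % remove components still touching the border
cleaned = imdilate(cleaned, strel('disk', 3));   % roughly undo the erosion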
A more generic method would be to try to extract the properties of the footnotes. For instance, if they are just text, you can remove them fairly easily using edge detection and a morphological opening with a line structuring element along the columns (a basic property used for text detection).
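As a very rough illustration of that second idea (a sketch only; it merges character edges with a closing along the rows rather than the opening suggested above, and every parameter is a guess):
gray  = rgb2gray(rgbIm);
edges = edge(gray, 'canny');
textBlobs = imclose(edges, strel('line', 20, 0)); % merge character edges into word-like blobs
textBlobs = bwareaopen(textBlobs, 50);            % discard isolated speckles
% Blobs in textBlobs sitting near the image border are candidates for the footnote regions.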
Hope it helps.
I could give you a clear idea or method if you upload the image.

Image registration in imageJ or matlab

I am looking for advice on how to automatically register image stacks acquired at different magnifications. Specifically, we need to align a small z-stack (~100 um) taken of several brain cells in the live brain to a large z-stack (~2 mm) taken of the fixed brain. We want to be able to find the previously imaged cells again and take high-resolution images of the staining to identify pre-synaptic inputs. Both the difference in magnification and the rotation need to be taken into account, as well as possible shrinkage or swelling. We would like advice on the best way to do this using ImageJ or Matlab.
With ImageJ you might want to download the plugin StackReg:
http://bigwww.epfl.ch/thevenaz/stackreg/
or this nice macro code freely available:
http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0053942
With Matlab you are looking for imregister, imregtform and/or imwarp. Also look at the optimizer and metric returned by imregconfig, which control how the registration is driven; the type of transformation you wish to apply (translation, rigid body, affine, etc.) is specified directly in the call to imregister/imregtform. The examples on the Matlab website are quite helpful. Please ask for more details if it's not clear.
EDIT:
Here is a simple piece of code to register images stored as a sequence in a cell array with Matlab and the functions I mentioned:
clear
clc
dialog_title = 'Select the directory containing the images to be processed'; % select images
folder_name = uigetdir('',dialog_title);
addpath(folder_name);
% select current folder
cd(folder_name);
ImagesToRead = dir('*.tif');
%preallocation
ImageCell = cell(1,length(ImagesToRead));
for i=1 : length(ImagesToRead)
ImageCell{i} = imread(ImagesToRead(i).name);
end
[optimizer, metric] = imregconfig('monomodal'); % for optical microscopy you need the 'monomodal' configuration.
% The optimizer returned here is a RegularStepGradientDescent; you can modify its default
% properties to adjust the registration, e.g. optimizer.MaximumIterations = 300;
RegisteredCell = cell(1,length(ImagesToRead));
for p = 2:length(ImagesToRead)
moving = ImageCell{p}; % the image you want to register
fixed = ImageCell{p-1}; % the image you are registering with
movingONE = rgb2gray(moving); % imregtform needs grayscale images
fixedONE = rgb2gray(fixed);
tform = imregtform(movingONE,fixedONE,'translation',optimizer,metric,'DisplayOptimization',true,'PyramidLevels',5);
tform = affine2d(tform.T);
RegisteredCell{p} = imwarp(moving,tform,'OutputView',imref2d(size(fixedONE))); %
end
Now all your images are stored in the cell array RegisteredCell and you can access each of them individually, for example RegisteredCell{yourIndex}. Hope that helps!
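For example, to check one registered pair visually (imshowpair is part of the Image Processing Toolbox; the index 2 is arbitrary):
p = 2;                                              % any index from 2 to length(ImagesToRead)
figure;
imshowpair(ImageCell{p-1}, RegisteredCell{p}, 'montage');
title('Reference frame (left) vs. registered frame (right)');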

How to fix the edges in the image

I have got a result as shown in the following image. As you can see, some of the edges are not straight. I want this image to be similar to this other one (I'm not sure why the grey shade appears; maybe because I extracted it manually?). The main thing is for the white edges to look similar. I tried using morphological operations, but without much improvement.
Any ideas how to fix this issue?
Thanks.
I loaded your data into a variable called "toBeSolved."
rawData1 = importdata('to be solved.JPG');
[~,name] = fileparts('to be solved.JPG');
newData1.(genvarname(name)) = rawData1;
% Create new variables in the base workspace from those fields.
vars = fieldnames(newData1);
for i = 1:length(vars)
assignin('base', vars{i}, newData1.(vars{i}));
end
Now this is a truecolor (RGB) image, so there are 3 planes, as can be seen from:
>> size(toBeSolved)
ans =
452 440 3
The data content of each plane appears to be identical, so maybe all you care about is the grayscale information from one plane? If that's the case, let's just take the first one:
data1 = im2double(toBeSolved(:,:,1));
And then normalize the data to the max value in the image:
data1 = data1 / max(data1(:));
A mesh view of the data (e.g. mesh(data1)) shows that, as expected, there is significant noise and corruption around the edges.
The appearance around the edges suggests applying a thresholding operation to the data. I experimented with the threshold value and found that 0.13 produces some improvement:
data2 = double(data1 > 0.13);
which gives a much cleaner binary result; you can view it in grayscale with imshow(data2).
I don't know if this is acceptable for your application; the edges are not perfect, but the result does seem improved over what you started with.
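If the remaining jaggedness still bothers you, one extra step you could try (not part of the original approach, just a suggestion with a guessed radius) is a light morphological smoothing of the thresholded mask:
se = strel('disk', 2);                  % small disk; adjust the radius to taste
data3 = imclose(imopen(data2, se), se); % opening then closing rounds off the jagged edges
imshow(data3);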
By the way, I checked your "solved" data as well, and it appears to have the same underlying level of noise and edge defects as the "toBeSolved" file, but at least visually the corruption in that image is harder to see due to the gray-scale values around the edges.