I'm currently working on an alignment script which aligns two images very well. Usually, I get a data set which contains over 50 images of cells. I normally calculate a transformation matrix (T) based on fluorescent beads. However, this T-matrix gave rise to polarization in unpolarized cells, indicating that the transformation is not optimal. Therefore, I switched to another script, which calculates a T-matrix based on cells rather than beads. This new T-matrix aligns almost perfectly for a fraction of the cells, but there is always a portion of the images that does not align as well.
I would like to continue with the alignment on cells, because this script works much better than the alignment on beads. In order to have an optimal T-matrix for each image, I would like to calculate a unique T-matrix for each image pair. I'm not very skilled in MATLAB, so the solution I could think of did not work.
Below you can find the current script. It works by creating variables for the images I want to align and assigning them to im1 and im2 in the script:
function [T] = alim(im1, im2, Tstart)
%ALIM Determines the transformation between the cameras.
im3 = im2;                       % keep the original second image for the final display
if (nargin > 2)
    % pre-align im2 with the starting transformation before registration
    im2 = imwarp(im2, Tstart, 'OutputView', imref2d(size(im1)));
end
optimizer = registration.optimizer.RegularStepGradientDescent;
optimizer.MaximumIterations = 500;
metric = registration.metric.MattesMutualInformation;
T = imregtform(im2, im1, 'affine', optimizer, metric);
if (nargin > 2)
    % compose the starting transformation with the newly found one
    T.T = Tstart.T * T.T;
end
figure;
imshowpair(im1, imwarp(im3, T, 'OutputView', imref2d(size(im1))));
end
I tried to incorporate a loop that imports all images from the folder sequentially and assigns them to im1 and im2. However, the problem that arises is that the data type changes from uint16 to cell, which can't be used for this type of transformation. In the script, one defines the location of the folders 'CAM1' and 'CAM2' and the number of images in these folders ('imnum'):
for i = 1:imnum
    x{i}=imread(strcat(link,'CAM1\',num2str(i),'.tif'));
    y{i}=imread(strcat(link,'CAM2\',num2str(i),'.tif'));
end
I would like to have your view on this problem, and hopefully you can make some suggestions on how I can import the images in a folder in one go while keeping the data type uint16. I'm always open to suggestions, so if you have other ideas on how to solve my problem, I would love it if you shared them with me. If anything is unclear, please contact me with questions!
With kind regards,
Reinier
x is a cell array, where each element x{i} is a uint16 array. Cell arrays can hold any other datatype, including more cell arrays, and are a great way to wrap collections of objects, especially when their sizes and/or types may differ.
In your case, just call your function like this:
T = alim(x{i}, y{i}, tstart);
Or, even better, put the output matrix into a similar cell:
T{i} = alim(x{i}, y{i}, tstart);
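For example, a minimal, untested sketch that puts the pieces together, assuming the link, imnum and tstart variables from your post (fullfile is used instead of strcat so the path separators are handled for you):
x = cell(1, imnum);   % preallocate cell arrays; each element stays uint16
y = cell(1, imnum);
T = cell(1, imnum);
for i = 1:imnum
    x{i} = imread(fullfile(link, 'CAM1', [num2str(i) '.tif']));
    y{i} = imread(fullfile(link, 'CAM2', [num2str(i) '.tif']));
    T{i} = alim(x{i}, y{i}, tstart);   % one unique T-matrix per image pair
end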
I used the function below to filter an image. Basically, it sets the DCT coefficients to 0 except for the top-left 8x8 elements of each block, which means it filters out all the high-frequency content and keeps only the low-frequency part.
function I_out = em_DCT_filter(I_in, N)
I_trim = double(I_in) - 128;                      % shift pixel range to be centred around zero
MYDCT = dctmtx(N);                                % N-by-N DCT transform matrix
dct = @(block_struct) MYDCT * block_struct.data * MYDCT';
B = blockproc(I_trim, [N, N], dct);               % block-wise forward DCT
mask = zeros(N, N);
mask(1:N/4, 1:N/4) = 1;                           % keep only the low-frequency corner of each block
AnselmMask = @(block_struct) block_struct.data .* mask;
BMask = blockproc(B, [N N], AnselmMask);
InverseDct = @(block_struct) MYDCT' * block_struct.data * MYDCT;
BReversedl = blockproc(BMask, [N N], InverseDct); % block-wise inverse DCT
I_out = uint8(BReversedl + 128);
end
After processing, an image looks like this:
I need the function to remove the details in the image (e.g. the patterns on the sweater, the shadow on the pants), which seems to work fine. However, the function also makes the image very fuzzy. How can I remove the details while keeping the region structure clear? For example, the sweater/pants regions should end up as more uniformly coloured regions than before.
You basically applied a "Local Low-Pass Filter".
No wonder a "fuzzy" look is the result: you removed the high-frequency data we usually interpret as details and "sharpness".
What you really should do is remove high-frequency details yet keep large edges intact.
A good way to do this is to use something like Anisotropic Diffusion.
With well-chosen parameters you'll be able to achieve the look you're after.
In general, those methods are called image abstraction.
Here's a great Open Source code for advanced Anisotropic Diffusion:
https://github.com/RoyiAvital/Fast-Anisotropic-Curvature-Preserving-Smoothing
Work with it; if you can contribute, that would be amazing.
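If your MATLAB release includes imdiffusefilt (Image Processing Toolbox, R2018a or newer), a minimal sketch could look like this; the file name and the two parameter values are illustrative assumptions, not tuned settings:
I = imread('person.png');            % hypothetical input image
if size(I, 3) == 3
    I = rgb2gray(I);                 % work on a grayscale version
end
J = imdiffusefilt(I, ...
    'NumberOfIterations', 15, ...    % more iterations = stronger smoothing
    'GradientThreshold', 10);        % gradients above this are treated as edges and smoothed less
imshowpair(I, J, 'montage');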
I've been working on an image segmentation problem and can't seem to get a good idea for my most recent problem.
This is what I have at the moment:
Click here for image. (This is only a generic example.)
Is there a robust algorithm that can automatically discard the right square as not belonging to the group of the other four squares (which I know should always be stacked more or less on top of each other)?
It can sometimes be the case that one of the stacked boxes is not found, so there's a gap, or that the bogus box is on the left side.
Your input is greatly appreciated.
If you have a way of producing BW images like your example:
s = regionprops(BW, 'centroid');
centroids = cat(1, s.Centroid);
xpos = centroids(:,1) will then hold the x-positions of the boxes.
From here you have multiple ways to go, depending on whether you always have just one separated box and one set of grouped boxes or not. For the "one bogus box far away, rest closely grouped" case (away from Matlab, so this is unchecked) you could even do something as simple as:
d = abs(xpos-median(xpos));
bogusbox = centroids(d==max(d),:);
imshow(BW);
hold on;
plot(bogusbox(1),bogusbox(2),'r*');
Making something that's robust for your actual use case, which I'm assuming doesn't consist of neat boxes, is another matter; as suggested in the comments, you need some idea of how closely together your good boxes are positioned, and how far away the bogus box(es) will be.
For example, you could use other regionprops measurements such as 'BoundingBox' or 'Extrema' and define some measure of how much the boxes overlap in x relative to each other, then group using that (this could be made to work even if you have multiple stacks in an image); a rough sketch of that idea follows.
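This untested sketch assumes the same BW image as above; the 0.5 overlap threshold is an illustrative choice:
s  = regionprops(BW, 'BoundingBox');
bb = cat(1, s.BoundingBox);          % one [x y width height] row per box
x1 = bb(:,1);
x2 = bb(:,1) + bb(:,3);
n  = numel(x1);
overlapFrac = zeros(n);
for i = 1:n
    for j = 1:n
        ov = min(x2(i), x2(j)) - max(x1(i), x1(j));
        overlapFrac(i,j) = max(ov, 0) / bb(i,3);   % x-overlap relative to the width of box i
    end
end
% a box that barely overlaps any other box in x is likely the bogus one
meanOverlap = (sum(overlapFrac, 2) - 1) / (n - 1); % exclude the self-overlap of 1
bogus = meanOverlap < 0.5;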
I am working with MATLAB for a school project. The assignment is to import a matrix file supplied to me, and display it as a new figure using image. Right now, I can make an image with
m1 = load('matrix1.csv'); image(m1)
But the image is rotated to the right. How do I rotate it so the image is presented horizontally rather than vertically?
Your issue likely arises from the fact that there are different ways of storing data (row-major vs. column-major). In this case, your .csv file is clearly not in the format you are expecting. The easiest thing to do is to simply transpose the matrix containing your data:
m1 = m1';
image(m1);
If there is something crazier going on and this flips it the wrong way (I don't think this should be the case, but you never know), you can try rot90: http://www.mathworks.com/help/matlab/ref/rot90.html
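For completeness, a quick illustrative use of rot90 (the second argument chooses how many 90-degree counter-clockwise rotations to apply):
m1 = load('matrix1.csv');
image(rot90(m1));        % one 90-degree counter-clockwise rotation
% image(rot90(m1, -1));  % use -1 (or 3) to rotate clockwise instead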
I have two CT images. How can I draw multiple ROIs on both images and calculate the mean difference between each pair of corresponding ROIs with MATLAB? I've used 'imrect' and 'imellipse', but these commands create a mask that turns the image into a binary image, and then I have trouble calculating the mean difference.
How do I show the images with the ROIs drawn on them?
I'm not quite sure what you want to do with imrect. Here is an idea, the way I would do it. You have to get your hands dirty with actual programming instead of the GUI, but it's VERY basic stuff and easy as soon as you understand indexing, which is very nice in MATLAB and the thing you should take away from this answer:
First off, you define the size of your ROIs, which can easily be done with variables:
width=20; %or whatever you wish
height=10;
then define the multiple ROIs using their upper-left corners for the positions:
ROI11=Image1(corner1:corner1+height,corner1:corner1+width); % rows span the height, columns the width
ROI12=Image1(corner2:corner2+height,corner2:corner2+width);
%...
ROI21=Image2(corner1:corner1+height,corner1:corner1+width);
ROI22=Image2(corner2:corner2+height,corner2:corner2+width);
%...
and then calculate the mean difference however you please, for example:
Mean1=mean(double(ROI11(:))-double(ROI21(:))); % cast to double so the difference can go negative
Mean2=mean(double(ROI12(:))-double(ROI22(:)));
%...
or something along those lines.
Give it a try and play a bit with it.
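If you do want to draw the ROIs interactively with imrect, here is an untested sketch along the same lines: read the rectangle position back instead of using the binary mask, index both images with it, and draw the outline with rectangle. Image1 and Image2 are the same illustrative names as above:
figure; imshow(Image1, []);
h   = imrect;                         % draw one ROI on the first image
pos = round(getPosition(h));          % [x y width height]
rows = pos(2):pos(2)+pos(4)-1;
cols = pos(1):pos(1)+pos(3)-1;
roi1 = double(Image1(rows, cols));    % cast to double so the difference can be negative
roi2 = double(Image2(rows, cols));    % same ROI on the second image
meanDiff = mean(roi1(:) - roi2(:));
figure; imshow(Image2, []);
rectangle('Position', pos, 'EdgeColor', 'r');   % show the ROI outline on the image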
So, I have a 512x512 distorted image, but what I'm trying to do is restore only a 400x400 centrally-positioned subsection of the image while leaving the rest of it distorted. How do I go about implementing something like that?
I was thinking of having a for loop within a for loop, like
for row = 57:456
    for col = 57:456
        % some filter in here
    end
end
But I'm not quite sure what to do next...
As a general rule, you can do a lot of things in MATLAB without loops using vectorization instead. As discussed in the comments below your question, there are filtering functions included with MATLAB such as medfilt2, wiener2 or imfilter which all work on two-dimensional images directly without the need for any loops.
To restore only the center part of your image, you apply the filter to the full image, store the result in a temporary variable and then copy the part that you want over into your distorted image:
tmpimage = medfilt2(distortedimage);
finalimage = distortedimage;
finalimage(57:456,57:456)=tmpimage(57:456,57:456);
Of course, if you don't care about edge effects during the reconstruction, you can just apply the filter to the part that interests you and avoid the tmpimage:
finalimage = distortedimage;
finalimage(57:456,57:456)=medfilt2(distortedimage(57:456,57:456));
Note how the sizes in an assignment need to match: you can't assign finalimage(57:456,57:456)=medfilt2(distortedimage) since the right-hand side produces a 512-by-512 matrix, which doesn't fit into the 400-by-400 center of finalimage.
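As a middle ground, you could also filter a slightly larger region than the one you want to restore and crop the result, so the filter's boundary artefacts don't sit exactly on the edge of the restored patch; the 8-pixel pad here is an arbitrary illustrative choice:
pad = 8;
r = (57-pad):(456+pad);                          % padded row/column range, still inside the 512x512 image
tmp = medfilt2(distortedimage(r, r));
finalimage = distortedimage;
finalimage(57:456, 57:456) = tmp(pad+1:end-pad, pad+1:end-pad);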