Hello, I am working on this image:
My objective is to count all the sperm cells in this image. To make the task easier, I am thinking of detecting just the lines. I am a beginner and at this step I am completely lost: are there any algorithms that can help me detect lines? I have seen the Hough transform and the scan-line algorithm, but I don't know which one would help me, or whether there are others.
Here's a piece of code that might help you get started.
Looking at the image, it seems very difficult to label the sperm cells by looking for straight lines, so a Hough transform won't help a lot.
In the example below I focused on filtering the image and counting the number of blobs. The code is commented and should be easy to understand.
img = imread('d9S3Z.png');
figure, imshow(img)
% convert to binary image
img = rgb2gray(img); % convert RGB to grayscale
level = graythresh(img); % Compute an appropriate threshold
% or use your own, e.g. level = 0.46
img_bw = im2bw(img,level);% Convert grayscale to binary
% create mask to remove edge interference
mask = zeros(size(img_bw));
mask(2:end-2,2:end-2) = 1;
img_bw(mask<1) = 1;
%invert image
img_inv =1-img_bw;
% find blobs
img_blobs = bwmorph(img_inv,'majority',10);
figure, imshow(img_blobs);
% count blobs
CC = bwconncomp(img_blobs);
num_sperm = CC.NumObjects % sperm count
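If stray specks survive the morphological filtering, an optional follow-up (my own addition, not part of the answer above) is to reject blobs below a minimum area before counting; the 20-pixel threshold is only an illustrative guess:
% optional: drop blobs smaller than a minimum area before counting
stats = regionprops(CC, 'Area');      % blob sizes from the connected components
keep = [stats.Area] >= 20;            % 20 px is an arbitrary example threshold
num_sperm_filtered = sum(keep)        % refined count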
I need to merge multiple bitmaps of the same size into one image. The image is basically rotated at different angles, and the rotated copies need to be merged into one whole image. I have tried multiple methods, but I run into many issues and am not able to save the resulting image.
I have tried several pieces of code, but I cannot really make sense of them. What I want to achieve is a transparent overlay (I'm not sure of the term) that superimposes two images so that you can actually see both at once.
figure1 = figure;
ax1 = axes('Parent',figure1);
ax2 = axes('Parent',figure1);
set(ax1,'Visible','off');
set(ax2,'Visible','off');
[a,map,alpha] = imread('E:\training data\0.bmp');
I = imshow(a,'Parent',ax2);
set(I,'AlphaData',alpha);
F = imshow('E:\training data\200.bmp','Parent',ax1);
I just want to superimpose multiple images.
This is my data set:
This is what I want to achieve: I want to add all of the rotated images together into one combined image.
This is what I get, sadly. I have tried everything.
The following does roughly what you want. First load the image, then divide it into 6 equal blocks, and add these together. To add the pixel values, I first converted the image to double, since uint8 can only hold pixel values up to 255; without the conversion you would just see a large bright spot in the image because of clipping.
Then add all the blocks. You will see in the output that the car is not always perfectly centred in its block, so depending on what you are trying to achieve you may want to align the blocks using something like xcorr2 (a rough sketch follows the code).
% load image
A = imread('S82CW.jpg');
fig = figure(1); clf
image(A);
% convert A to double and divide in blocks.
A = double(A);
[img_h, img_w, ~] = size(A);
block_h = img_h/2;
block_w = img_w/3;
% split image in blocks
Asplit = mat2cell(A, repelem(block_h,2), repelem(block_w,3), 3);
% check if splitting makes sense
figure(2); clf
for k = 1:numel(Asplit)
subplot(3,2,k)
image(uint8(Asplit{k}))
end
% superimpose all blocks,
A_super = zeros(size(Asplit{1,1}),'like',Asplit{1,1} ); % init array, make sure same datatype
for k = 1:numel(Asplit)
A_super = A_super + Asplit{k};
end
% divide by max value in A and multiply by 255 to make pixel
% values fit in uint8 (0-255)
A_super_uint8 = uint8(A_super/max(A_super,[],'all')*255);
figure(3); clf;
image(A_super_uint8)
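As mentioned above, the blocks could be aligned before summing. The answer mentions xcorr2; below is a rough sketch of the same idea using normxcorr2 from the Image Processing Toolbox instead, matching a central patch of each block against the first block. This is my own untested addition: it assumes the background is uniform enough that the wrap-around of circshift does no harm.
% rough sketch: align each block to the first one before summing
ref = rgb2gray(uint8(Asplit{1,1}));               % reference block
A_aligned = Asplit{1,1};
for k = 2:numel(Asplit)
    blk  = rgb2gray(uint8(Asplit{k}));
    r0   = round(size(blk,1)/4);  c0 = round(size(blk,2)/4);
    tmpl = blk(r0:3*r0, c0:3*c0);                 % central patch as template
    cc   = normxcorr2(tmpl, ref);
    [ypk, xpk] = find(cc == max(cc(:)), 1);
    yshift = (ypk - size(tmpl,1) + 1) - r0;       % shift that moves the patch onto its match
    xshift = (xpk - size(tmpl,2) + 1) - c0;
    A_aligned = A_aligned + circshift(Asplit{k}, [yshift, xshift, 0]);
end
A_aligned_uint8 = uint8(A_aligned/max(A_aligned,[],'all')*255);
figure(4); clf; image(A_aligned_uint8)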
I have an image with multi-frequency noise. I used the code from this link:
Find proper notch filter to remove pattern from image
source image : orig_image
But the noise in my final image has not been removed.
As you know, I need to remove the gradient in the vertical direction. The frequency representation of the image is shown below:
fft of image
Does anyone have an idea how to remove this noise in MATLAB?
I applied a Sobel filter and a median filter, but the result did not improve.
Note that my goal is to remove the lines inside the object.
Regards.
You have two kinds of noise: irregular horizontal lines and salt and pepper. Getting rid of the lines is easy as long as the object doesn't cover the whole horizontal range (which it doesn't in your example).
I just sample a small vertical stripe on the left to only get the stripes and then subtract them from the whole image. Removing the salt and pepper noise is simple with a median filter.
Result:
Code:
% read the image
img = imread('http://i.stack.imgur.com/zBEFP.png');
img = double(img(:, :, 1)); % PNG is uint8 RGB
% take the mean of columns 100:200 and subtract it from all columns
lines = mean(img(:, 100:200), 2);
img = img - repmat(lines, 1, size(img, 2));
% remove salt and pepper noise
img = medfilt2(img, [3,3], 'symmetric');
% display and save
imagesc(img); axis image; colormap(gray);
imwrite((img - min(img(:))) / (max(img(:)) - min(img(:))), 'car.png');
Here is code from an assignment I once had to do. It was a much simpler example than yours: there were only 4 components in the frequency domain causing noise, so simply setting those 4 components to zero manually (hfreq) solved my problem. I don't know how well this will work in your case; perhaps writing an algorithm to find the hfreqs that stand out would help (a sketch of that idea follows the code). Here is the original image I used:
This is the code I used :
% filtering out the noisy image
clc
clear all
close all
image = imread('freqnoisy.png');
image = double(image);
image = image/(max(max(image)));
imshow(image) % show original image
Fimage = fft2(image);
figure
imshow(((fftshift(abs(Fimage)))),[0 5000]) %shows all frequency peaks that stick out
hfreq = ones(256); % notch mask; assumes a 256x256 image
hfreq(193,193) = 0;
hfreq(65,65)   = 0;
hfreq(119,105) = 0;
hfreq(139,153) = 0;
Fimage_filtered = fftshift(Fimage).*hfreq; % zero out the four noisy components
figure
imshow(abs(Fimage_filtered),[0 5000]) % show freq domain without undesired freq
filtered_im = ifft2(ifftshift(Fimage_filtered));
figure
imshow(filtered_im)
This is what your output will look like:
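If you would rather not hard-code the peak positions in hfreq, here is a rough sketch (my own addition, untested on your data) of suppressing any frequency component that stands out, while protecting the low-frequency region around the centre. The radius of 20 pixels and the factor of 10 over the median magnitude are arbitrary assumptions you would need to tune.
% rough sketch: automatically zero out outlier frequency components
F   = fftshift(fft2(image));
mag = abs(F);
[h, w] = size(mag);
[X, Y] = meshgrid(1:w, 1:h);
keep_centre = hypot(X - (w+1)/2, Y - (h+1)/2) < 20;   % protect low frequencies
peaks = (mag > 10*median(mag(:))) & ~keep_centre;     % unusually strong components
F(peaks) = 0;
figure
imshow(real(ifft2(ifftshift(F))))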
I need to find similar corners in an image (for example, the 4 corners of a rectangle: the same corners, just in different orientations).
I have this code:
% read the image into MATLAB and convert it to grayscale
I = imread('s2.jpg');
Igray = rgb2gray(I);
figure, imshow(I);
% We can see that the image is noisy. We will clean it up with a few
% morphological operations
Ibw = im2bw(Igray,graythresh(Igray));
se = strel('line',3,90);
cleanI = imdilate(~Ibw,se);
figure, imshow(cleanI);
% Perform a Hough Transform on the image
% The Hough Transform identifies lines in an image
[H,theta,rho] = hough(cleanI);
peaks = houghpeaks(H,10);
lines = houghlines(Ibw,theta,rho,peaks);
figure, imshow(cleanI)
% Highlight (by changing color) the lines found by MATLAB
hold on
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green');
end
After running this code I convert my starting image into a binary image with:
binary = im2bw(I);
After this I take the product of those two binary images, which I think gives me the corners:
product = binary .* cleanI;
Now I imfuse this picture with the grayscale starting picture and get this:
I don't know what to do to get only those 4 corners.
OK, second try. Below is code that does not completely do the job, but it might help.
edge identifies the contours, and with regionprops you get the characteristics of each identified element. As soon as you know what characteristics your desired object has, you can filter for it and plot it. I went through the areas in shapedata.Area and the 6th largest was the one you were searching for (see the snippet after the code). If you combine Area with some of the other characteristics you might get exactly the one you want. As I said, not ideal or final, but perhaps a start...
clear all
close all
source = imread('Mobile Phone.jpg');
im = rgb2gray(source);
bw = edge(im,'canny',[],sqrt(2));
shapedata = regionprops(bwlabel(bw,8),'all');
%index = find([shapedata.Area]== max([shapedata.Area]));
index = 213;
data = shapedata(index).PixelList;
figure
imshow(im)
hold on
plot(data(:,1),data(:,2),'ro');
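As a possible refinement (my own addition, not part of the original answer), instead of hard-coding index = 213 you could pick the region with the 6th-largest area, as described above:
[~, order] = sort([shapedata.Area], 'descend'); % sort regions by area, largest first
index = order(6);                               % 6th largest, per the note above
data  = shapedata(index).PixelList;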
I'm trying to read the values in this image into variables using OCR in MATLAB. I'm having trouble doing so, so I tried to split this image into smaller parts using the white boundary lines and then read each part, but I don't know how to do this. Any help would be appreciated, thanks.
If the blocks are always delimited by a completely vertical line, you can find where they are by comparing the original image (here converted from RGB to grayscale so it is a single plane) to a matrix made of repeats of the original image's first row only. Since the lines are vertical, the pixels of such a column have the same intensity all the way down and therefore match the first row. This generates a binary mask that can be combined with a quick thresholding to reject the lines that are all black pixels in every row. Then invert this mask and use regionprops to locate the bounding box of each region. Then you can pull these out and do what you like.
If the lines dividing the blocks of text are not always vertical or constant intensity throughout then there's a bit more work that needs to be done to locate the dividing lines, but nothing that's impossible. Some example data would be good to have in that case, though.
img = imread('http://puu.sh/cU3Nj/b020b60f0b.png');
imshow(img);
imgGray = rgb2gray(img);
imgMatch = imgGray == repmat(imgGray(1,:), size(imgGray, 1), 1);
whiteLines = imgMatch & (imgGray > 0);
boxes = regionprops(~whiteLines, 'BoundingBox');
for k = 1:6
subplot(3,2,k)
boxHere = round(boxes(k).BoundingBox);
imshow(img(boxHere(2):(boxHere(2)+boxHere(4)-1), boxHere(1):(boxHere(1)+boxHere(3)-1), :));
end
You can sum along the columns of a binary image corresponding to that input image and find peaks in the sum values. This is precisely what the code below does:
img = imread('http://puu.sh/cU3Nj/b020b60f0b.png');
BW = im2bw(img,0.1); %// convert to a binary image with a low threshold
peak_sum_max = 30; %// max of sum of cols to act as threshold to decide as peak
peaks_min_width = 10; %// min distance between peaks i.e. min width of each part
idx = find( sum(BW,1)>=peak_sum_max );
split_idx = [1 idx( [true diff(idx)>peaks_min_width ] )];
split_imgs = arrayfun(@(x) img(:,split_idx(x):split_idx(x+1)),...
    1:numel(split_idx)-1,'Uni',0);
%// Display split images
for iter = 1:numel(split_imgs)
figure,imshow(split_imgs{iter})
end
Please note that the final output split_imgs is a cell array with each cell holding image data for each split image.
If you would like to have the split images directly without the need for messing with cell arrays, after you have split_idx, you can do this -
%// Get and display split images
for iter = 1:numel(split_idx)-1
split_img = img(:,split_idx(iter):split_idx(iter+1));
figure,imshow(split_img)
end
There is now a built-in ocr function in the Computer Vision System Toolbox.
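In case it helps, a minimal sketch of that built-in function, run on one of the split images produced above (assuming the Computer Vision System Toolbox is installed), could look like this:
results = ocr(split_imgs{1});  % recognize text in the first split image
results.Text                   % recognized characters as a string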
I am new to MATLAB. I wanted to know whether I can extract a part of an image from inside a specified boundary, distinguished by colour (a red boundary in my case). The function should first trace the boundary and then extract the part of the image that lies inside that boundary. I have attached my image (an image of a human head); I want to extract the brain part from the head, and the rest of the image should be ignored. I tried to find edges using the following code (it should show 1's for boundaries and 0's where there is no boundary), but it showed only 0's.
Any help will be greatly appreciated.
P.S. The attached image shows the original image and the image with the boundary; the code will work on the one with the boundary and will extract the part of the image lying inside that boundary.
Below is the code I tried:
BW = edge(x)
BW = edge(x,'sobel')
BW = edge(x,'sobel',thresh)
BW = edge(x,'sobel',thresh,direction)
[BW,thresh] = edge(x,'sobel',...)
BW = edge(x,'prewitt')
BW = edge(x,'prewitt',thresh)
BW = edge(x,'prewitt',thresh,direction)
[BW,thresh] = edge(x,'prewitt',...)
BW = edge(x,'roberts')
BW = edge(x,'roberts',thresh)
[BW,thresh] = edge(x,'roberts',...)
BW = edge(x,'log')
BW = edge(x,'log',thresh)
BW = edge(x,'log',thresh,sigma)
[BW,threshold] = edge(x,'log',...)
BW = edge(x,'zerocross',thresh,h)
[BW,thresh] = edge(x,'zerocross',...)
BW = edge(x,'canny')
BW = edge(x,'canny',thresh)
BW = edge(x,'canny',thresh,sigma)
[BW,threshold] = edge(x,'canny',...)
Since you have presented your problem domain as CT images, I have a good suggestion for extracting the region of brain tissue. There is a good assumption you can make.
A good assumption:
The brain region contains no bone (in normal cases) other than the cranium, and, based on some properties of CT, you can easily extract (or remove) the bone (the cranium in this case) by looking up the Hounsfield scale (http://en.wikipedia.org/wiki/Hounsfield_scale).
0) To get the correct Hounsfield units you need three elements: i) the original pixel value, ii) the rescale slope, and iii) the rescale intercept (all three can be found in the original DICOM header, and the HU can then be computed with our high-school math knowledge: y = mx + b, since you have the intercept, the slope and the input value). A short sketch follows this list.
1) Once you know where the bone is, you just need to subtract it from your image to get everything bounded by the cranium.
2) Looking at your MATLAB code, I'm sure you can perform step 1) to segment the right region from the leftovers.
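As promised, here is a hedged sketch of steps 0) and 1). The file name 'ct_slice.dcm' and the 300 HU bone threshold are only illustrative assumptions, and the RescaleSlope/RescaleIntercept fields may be stored differently depending on the scanner:
info  = dicominfo('ct_slice.dcm');                       % hypothetical DICOM file
px    = double(dicomread(info));
hu    = info.RescaleSlope * px + info.RescaleIntercept;  % y = m*x + b -> Hounsfield units
bone  = hu > 300;                                        % rough bone threshold (assumption)
soft  = hu;
soft(bone) = min(hu(:));                                 % suppress the cranium, keep what is inside
figure, imshow(soft, [])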
Just for the record. Mathematica code:
Edit
If you only want to extract the brain without tracing the outline, it is actually easier: