How to filter speckle noise? - MATLAB

I am trying to remove speckle noise from an image. All my research points me toward the Knox-Thompson method, developed by astronomers, but I can't find any information about it, much less an algorithm.
What is Knox-Thompson method and what algorithm does it use?

You can have a look at the despeckle plug-in of GIMP:
https://git.gnome.org/browse/gimp/tree/plug-ins/common/despeckle.c
There is also the despeckling implementation in ScanTailor:
https://github.com/scantailor/scantailor/tree/f1711c941ea0a6d78b07f714d5ff33715ba33491/filters/output
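If a standard spatial filter turns out to be enough (rather than the astronomical Knox-Thompson approach), MATLAB's Image Processing Toolbox already has the usual despeckling tools. A minimal sketch, with assumed window sizes and a placeholder filename:

% Generic despeckling sketch (Image Processing Toolbox).
% This is NOT the Knox-Thompson method; the 3x3 / 5x5 window sizes
% and the filename 'noisy.png' are assumptions.
I = im2double(imread('noisy.png'));
if size(I, 3) == 3
    I = rgb2gray(I);              % work on a single channel
end

Imed = medfilt2(I, [3 3]);        % median filter: good for impulsive, salt-and-pepper-like speckle
Iwie = wiener2(I, [5 5]);         % adaptive Wiener filter: adapts to local mean and variance

figure;
subplot(1,3,1); imshow(I);    title('Noisy input');
subplot(1,3,2); imshow(Imed); title('medfilt2, 3x3');
subplot(1,3,3); imshow(Iwie); title('wiener2, 5x5');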

Related

MATLAB: How to Retrieve Intensity-Based Registration Data (with imregister) to re-Perform Registration?

I thought this should be a simple task, I just can't find the way to do it:
I am using 'imregister' (MATLAB) to register two medical X-ray images.
To ensure I get the best registration outcome as possible, I use some image processing techniques such as contrast enhancement, blackening of objects that are different between images and even cropping.
The outcome of this seems to be quite satisfying.
Now, I want to perform the exact same registration on the original images, so that I can display the two ORIGINAL images automatically in alignment.
I think that an output parameter such as tform serves this purpose of performing a certain registration on any two images, but unfortunately 'imregister' does not provide such a parameter, as far as I know.
It does provide as an output the spatial referencing object R_reg, which might be the answer, but I still haven't figured out how to use it to re-perform the registration.
I should mention that since I am dealing with medical X-ray images, on which none of the feature-detection algorithms seems to work well enough to perform registration, I can only use intensity-based (as opposed to feature-based) registration, and therefore am using 'imregister'.
Does anyone know how I can accomplish this?
Thanks!
Noga
So, to make an answer out of my comment: there are two things you can do, depending on the MATLAB release you are using.
Option 1: R2013a and earlier
I suggest modifying the built-in imregister function so that tform is also returned as an output, and saving that function under another name.
For example:
function [movingReg,Rreg,tform] = imregister2(varargin)
Save that, add it to your path, and you're good to go. If you type edit imregister you will notice that it first calls imregtform to compute the required geometric transformation, and then calls imwarp to apply it. Which leads us to Option 2.
Option 2: R2013b and later
In that case you can directly use imregtform to get the tform object and then use imwarp to apply it. Easy, isn't it?
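A minimal sketch of Option 2, assuming the preprocessed images share the coordinate frame of the originals (if you cropped them, you would first have to compensate for the crop offset); the variable names and the 'rigid' transform type are placeholders to adjust to your data:

% Estimate the transformation on the preprocessed (enhanced) images.
[optimizer, metric] = imregconfig('monomodal');   % both X-rays come from the same modality
tform = imregtform(movingPre, fixedPre, 'rigid', optimizer, metric);

% Re-apply the exact same transformation to the ORIGINAL moving image,
% resampling it onto the fixed image's grid so the two can be overlaid.
movingRegOrig = imwarp(movingOrig, tform, 'OutputView', imref2d(size(fixedOrig)));

figure; imshowpair(fixedOrig, movingRegOrig, 'blend'); title('Originals in alignment');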
Hope that makes things clearer!

How to Extract Part of an Image Using MATLAB

I have not used MATLAB much. I need to extract the left and right coronary arteries from a given heart image.
This is my image:
Based on morphological operations, this is what I have come up with:
f = imread('heart.jpg');

% Structuring elements: a large diamond and two thin lines at 90 and 120 degrees
diam    = strel('diamond', 19);
line90  = strel('line', 10, 90);
line120 = strel('line', 10, 120);

% Close with the line elements to keep elongated (vessel-like) structures
line90f  = imclose(f, line90);
line120f = imclose(f, line120);
bothline = line90f + line120f;    % saturating uint8 addition, roughly a max of the two

% Close with the diamond to keep blob-like structures, then subtract the lines
diamf    = imclose(f, diam);
arterybm = diamf - bothline;

% Threshold to a binary mask and cast it back to the class of the input image
binaryartery = im2bw(arterybm, 0);       % any non-zero pixel becomes 1
mask = cast(binaryartery, class(f));

% Apply the mask to each colour plane
maskedRed   = f(:,:,1) .* mask;
maskedGreen = f(:,:,2) .* mask;
maskedBlue  = f(:,:,3) .* mask;
maskedRGBImage = cat(3, maskedRed, maskedGreen, maskedBlue);

% Display the intermediate results
subplot(2,3,1); imshow(f);              title('Input Image');
subplot(2,3,2); imshow(diamf);          title('imclose with Diamond Mask');
subplot(2,3,3); imshow(bothline);       title('imclose with Line 120 and 90 mask');
subplot(2,3,4); imshow(arterybm);       title('Difference of line and diamond');
subplot(2,3,5); imshow(binaryartery);   title('Convert to binary image');
subplot(2,3,6); imshow(maskedRGBImage); title('Apply mask to input image');
Is there any better approach?
This task is quite a difficult one, worth an academic article if you find a solution that works flawlessly in most cases. My suggestion: search for articles on the topic, and also try the MATLAB File Exchange (http://www.mathworks.com/matlabcentral/fileexchange/). If you are very lucky, someone may have already solved this problem and posted a solution.
Have a look at the Frangi filter (a.k.a. vesselness or ridge-detection filter); it is designed to detect blood vessels.
There is an implementation available on the file exchange:
http://www.mathworks.com/matlabcentral/fileexchange/24409-hessian-based-frangi-vesselness-filter
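As a rough sketch of how that submission is typically used (the function name FrangiFilter2D and the threshold below are assumptions; check the exact name and options in the version you download):

% Vesselness filtering sketch using the File Exchange Frangi filter.
f = imread('heart.jpg');
I = im2double(rgb2gray(f));       % the filter expects a grayscale double image
V = FrangiFilter2D(I);            % vesselness map, high where elongated vessel-like structures are
vesselMask = V > 0.1;             % threshold is an arbitrary assumption; tune it

figure;
subplot(1,2,1); imshow(I);          title('Input');
subplot(1,2,2); imshow(vesselMask); title('Thresholded vesselness');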

Segmenting a human point cloud into 6 main parts

I have a point cloud of a human and want to segment it into 6 main parts, including:
hands, feet, head, ...
How can I do this using OpenCV, the PCL library, or MATLAB? Which segmentation or clustering algorithm can I use?
I think that graph-cut algorithms may be interesting for you; try searching for the phrase
"graph cut mesh segmentation".
Also look here: http://people.cs.umass.edu/~kalo/papers/LabelMeshes/
PCL has a pass-through filter, which allows you to apply split planes to the 3D cloud.
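As a rough MATLAB analogue of that split-plane / pass-through idea, assuming the cloud is an N-by-3 array pts = [x y z] with z pointing up and the person standing upright; all thresholds are arbitrary assumptions that would need tuning per scan:

% Split the cloud along the vertical axis into coarse body regions.
zTop    = max(pts(:,3));
zBottom = min(pts(:,3));
h = zTop - zBottom;

headIdx  = pts(:,3) > zBottom + 0.85*h;   % top ~15% of the body
feetIdx  = pts(:,3) < zBottom + 0.10*h;   % bottom ~10%
torsoIdx = ~headIdx & ~feetIdx;           % everything in between; split further for arms/legs

head  = pts(headIdx,  :);
feet  = pts(feetIdx,  :);
torso = pts(torsoIdx, :);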

Difference between cvPOSIT and cvFindExtrinsicCameraParams2

Another OpenCV question;
Without my having to implement two versions, can anyone enlighten me as to the differences between cvPOSIT and cvFindExtrinsicCameraParams2, and perhaps the advantages of each?
The inputs and outputs appear to be the same.
From my experience, cvFindExtrinsicCameraParams2() works for coplanar points (so it is probably an implementation of http://dl.acm.org/citation.cfm?id=228149), while cvPOSIT() doesn't. But I am not 100% sure.
It appears that cvPOSIT() only exists in OpenCV's old C API and not in the new C++ API. Conversely, cvFindExtrinsicCameraParams2() is in both. While not a perfect indicator, my best guess is that they both implement the POSIT algorithm with minor modifications and the former exists only for legacy reasons.
Beyond that, your guess is good as mine. If you want a definitive answer, I suggest asking on the OpenCV mailing list.
I've already used cvPOSIT. It only works with 3D non-coplanar points on the object, because it is based on the algorithm from DeMenthon, D. F., and Davis, L. S., 1995, "Model-Based Object Pose in 25 Lines of Code". So you will have to find a workaround for coplanar features.
cvFindExtrinsicCameraParams2(), on the other hand, also works with planar features: it solves the transformation using cvFindHomography and then refines the result with Levenberg-Marquardt optimization. For non-coplanar points, the initial estimate is instead obtained with DLT (Direct Linear Transformation), not the "...25 Lines of Code" algorithm.
I'm not entirely sure about their relative performance, i.e. which one is faster. As far as I know, "...25 Lines of Code" is very fast and still suitable for real-time vision.

Training a neural network

I have a picture of 1200×1175 pixels. I want to train a net (MLP or Hopfield) to learn a specific part of it (201×111 pixels) and save its weights, so I can use them in a new net (with the same features) to find that specific part without retraining. Now there are these questions: which kind of net is useful, MLP or Hopfield? If MLP, how many hidden layers? The trainlm function is unusable because of an "out of memory" error. I converted the picture to a binary image; is that useful?
What exactly do you need the solution to do? Find an object within an image (like "Where's Waldo")? Will the target object always be the same size and orientation? Might it look different because of lighting changes?
If you just need to find a fixed pattern of pixels within a larger image, I suggest using a straightforward correlation measure, such as cross-correlation, to find it efficiently.
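As a minimal sketch of that suggestion, using normxcorr2 from the Image Processing Toolbox (file names are placeholders; convert with rgb2gray first if the images are RGB):

scene    = im2double(imread('picture.png'));   % the 1200x1175 image
template = im2double(imread('patch.png'));     % the 201x111 part to find

c = normxcorr2(template, scene);               % correlation surface, peaks where the patch matches
[~, idx] = max(c(:));
[peakY, peakX] = ind2sub(size(c), idx);

% normxcorr2 pads the result, so shift the peak back to the top-left corner of the match
topLeftY = peakY - size(template,1) + 1;
topLeftX = peakX - size(template,2) + 1;

figure; imshow(scene); hold on;
rectangle('Position', [topLeftX, topLeftY, size(template,2), size(template,1)], 'EdgeColor', 'r');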
If you need to contend with any of the issues mentioned above, then there are two basic solutions: 1. Build a model using examples of the object in different poses, scalings, etc., so that the model will recognize any of them, or 2. Develop a way to normalize the patch of pixels being examined, to minimize the effect of those distortions (like Hu's invariant moments). If nothing else, you'll want to perform some sort of data reduction to get the number of inputs down. Technically, you could also try a model which is invariant to rotations, etc., but I don't know how well those work. I suspect that they are more temperamental than traditional approaches.
I found AdaBoost to be helpful in picking out only the important parts of an image. That, together with resizing the image to something very tiny (like 40×30) after Gaussian filtering, will speed things up and put weight on larger areas of the photo rather than on single insignificant pixels.
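A small sketch of that preprocessing step, assuming a grayscale image; the sigma value and filename are guesses:

% Blur, then shrink, so each retained value summarizes a neighbourhood
% rather than a single pixel. 40x30 is the size suggested above.
I = im2double(imread('picture.png'));
Ismooth = imgaussfilt(I, 2);            % Gaussian pre-filter (sigma = 2 is an assumption)
Ismall  = imresize(Ismooth, [30 40]);   % rows x cols, i.e. a 40x30 thumbnail
inputs  = Ismall(:);                    % 1200 values instead of ~1.4 million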