I am doing pansharpening fusion with one multispectral image and one panchromatic (Pan) image from Landsat 8.
The problem happens when I try to apply the AWLP algorithm from the Pansharpening Toolbox (author: Gemine Vivone et al.), because the algorithm internally uses a 2-D non-decimated wavelet transform function called ndwt2.
http://www.codeforge.com/read/255069/ndwt2.m__html <==== this file doesn't work for me
That function apparently was in MATLAB's Wavelet Toolbox two or more years ago. I don't know how the algorithm works internally, so I need to know how to use swt2, dwt2, etc. to do the same operation that ndwt2 did.
I've been searching the whole web, even the third page of Google results xD, but have not found a solution.
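For clarity, something along these lines is what I'm hoping ndwt2 can be replaced with; this is only a guess on my part ('db1' and level 1 are placeholders, and swt2 requires each image dimension to be divisible by 2^level):

level = 1;                                        % placeholder decomposition level
[A, H, V, D] = swt2(double(pan), level, 'db1');   % stationary (undecimated) 2-D transform
rec = iswt2(A, H, V, D, 'db1');                   % the inverse transform reconstructs the image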
I am working on the problem of handwritten character recognition. For this, I use support vector machines (SVM) as a classifier. The matrix score below shows an example of the scores returned by the SVM for 5 samples; the number of classes is also 5. I want to transform this matrix into probabilities.
score = [ 0.2590  -0.6033  -1.1350  -1.2347  -0.9776
         -1.4727  -0.2136  -0.9649   0.1480  -1.4761
         -0.9637  -0.8662   0.0674  -1.0051  -1.1293
         -2.1230  -0.8805  -0.9808  -0.0520  -0.0836
         -1.6976  -1.1578  -0.9205  -1.1101   1.0796 ]
According to my research on existing methods, Platt scaling seems the most appropriate in my case. I found an implementation of this method at this link: Platt scaling, but the problem is that I don't understand the third parameter it expects. Please help me understand this implementation and make it runnable.
I await your answers and thank you in advance.
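To make the question more concrete, here is a rough sketch of what I understand Platt-style calibration to be (not the linked implementation), assuming a held-out set with an N-by-1 vector labels of true class indices and the corresponding N-by-5 score matrix, and using glmfit/glmval from the Statistics Toolbox:

nClasses = size(score, 2);
prob = zeros(size(score));
for c = 1:nClasses
    y = double(labels == c);                                  % one-vs-rest targets
    b = glmfit(score(:, c), y, 'binomial', 'link', 'logit');  % fits the sigmoid parameters A and B
    prob(:, c) = glmval(b, score(:, c), 'logit');             % 1 ./ (1 + exp(-(B + A*f)))
end
prob = bsxfun(@rdivide, prob, sum(prob, 2));                  % renormalize so each row sums to 1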
I'm playing with ARPACK. I looked into the example they provide, zndrv4.f, in the ARPACK/EXAMPLES/COMPLEX/ directory. I also came across the NAG Fortran Library. In NAG, there are some eigenvalue problem solvers, F12***. The F12*** routines in NAG are equivalent to znaupd in ARPACK, so I want to check whether they yield the same results.
I first looked into the example provided in the documentation of F12ARF, at http://www.nag.co.uk/numeric/fl/nagdoc_fl22/pdf/F12/f12arf.pdf. In the end, it yields the eigenvalues
509.9390
380.9092
659.1558
271.9412
around the shift sigma = 500. I solved the same generalized eigenvalue problem in MATLAB, and MATLAB gave the same results.
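For reference, the MATLAB check is along these lines (a sketch, not the attached zndrv4.m), assuming A and M are built exactly as in the F12ARF example:

sigma = 500;                 % the shift used by both NAG and ARPACK
d = eigs(A, M, 4, sigma);    % 4 eigenvalues of A*x = lambda*M*x closest to sigma (shift-invert)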
But when I used znaupd from ARPACK to solve the same problem, I obtained different answers. The 4 eigenvalues are now
rd1 = 501.65650188259684
rd2 = 480.15153312181440
rd3 = 526.52596256924164
rd4 = 461.99019999608828
The routines in NAG and ARPACK both use shifted-inverse (shift-invert) mode and solve a generalized problem.
I'm not sure what is wrong. I attached my script for the ARPACK driver zndrv4.f (it's basically the same as the example file provided by ARPACK; I only changed the matrices slightly around line 174 to match those in the NAG example) and the MATLAB file zndrv4.m.
https://www.dropbox.com/s/b9f1btl7a2ugrh3/zndrv4.f?dl=0
https://www.dropbox.com/s/pctmennp64mkn9m/zndrv4.m?dl=0
Update: the M matrix in the F12ARF example is normalized (entries divided by six). I followed this and got the wrong results above in ARPACK. If I don't divide the entries by six, the ARPACK script gives me the correct answer (the same as MATLAB). Now I'm even more confused. Does this mean the ARPACK routine is not robust?
I found that there is a function called imquantize for quantizing an input image. I'm trying to quantize using 4 bits (16 levels). I did the following:
>> quan = imquantize(im,16);
But, I got the following error:
??? Undefined function or method 'imquantize' for input arguments of type 'uint8'
How can I go around this?
Thanks.
imquantize is a new function introduced in Image Processing Toolbox R2012b. You can check which version of the Image Processing Toolbox is installed in your MATLAB (if any) with this command:
ver
imquantize is new to the 2012b release of the Image Processing Toolbox. Therefore, if you don't have 2012b (released last month), you won't have imquantize.
In any case, your usage of imquantize would be incorrect. imquantize takes, as a second argument, a vector of levels to be used as thresholds, not the number of levels. Those levels could, for example, be obtained through multithresh (which is also new to IPT R2012b).
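For example, a 4-bit (16-level) quantization could look roughly like this, assuming R2012b or later and an 8-bit image im (a sketch, not the only correct usage):

thresh = multithresh(im, 15);       % 15 thresholds -> 16 quantization levels
quan   = imquantize(im, thresh);    % label image with values 1..16
values = linspace(0, 255, 16);      % optionally map each label back to a gray value
quan8  = uint8(values(quan));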
I have this photo:
and I'm trying to do document binarization using the Niblack algorithm.
I've implemented the simple Niblack thresholding
T = mean + k * standardDeviation
and this was its result:
The problem is that there are some parts of the image where the window doesn't contain any objects, so the algorithm detects the noise as objects and enhances it.
I tried to apply a blurring filter and then global thresholding;
this was the result:
which isn't fixed by any other filter.
I guess the only solution is to prevent the algorithm from detecting noise when the window is free of objects.
I'm interested in doing this with the Niblack algorithm rather than a different algorithm, so any suggestions?
I tried the Sauvola algorithm from section 3.3 of the paper "Adaptive document image binarization" by J. Sauvola and M. Pietikäinen.
It's a modified version of the Niblack algorithm that uses a modified form of Niblack's threshold equation,
which returned pretty good results:
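For reference, a minimal sketch of the Sauvola threshold as I used it, assuming local_mean and local_std have already been computed over a sliding window (k and R below are the commonly used defaults, not necessarily the exact values from the paper's experiments):

k = 0.5;  R = 128;                                  % typical values for 8-bit images
T = local_mean .* (1 + k * (local_std / R - 1));    % Sauvola's modified Niblack threshold
binarized = X >= T;                                 % background -> 1 (white), text -> 0 (black)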
I also tried another modification of Niblack, which is implemented in this paper
in section 5.5, Algorithm No. 9a: Université de Lyon, INSA, France (C. Wolf, J.-M. Jolion),
which returned good results as well:
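As far as I understand the Wolf–Jolion modification, the threshold is roughly the following (my sketch of the formula, so please check the paper for the exact definition), where R is the maximum local standard deviation over the whole image and M is its minimum gray level:

k = 0.5;                                            % tuning constant
R = max(local_std(:));                              % maximum local standard deviation
M = min(double(X(:)));                              % minimum gray level of the image
T = local_mean - k * (1 - local_std / R) .* (local_mean - M);
binarized = X >= T;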
Did you look here: https://stackoverflow.com/a/9891678/105037
filt = fspecial('average', w);   % w-by-w averaging window; w and k_threshold must be tuned
local_mean = imfilter(X, filt, 'symmetric');
local_std = sqrt(max(imfilter(X .^ 2, filt, 'symmetric') - local_mean .^ 2, 0));   % var = E[X^2] - E[X]^2
X_bin = X >= (local_mean + k_threshold * local_std);
I don't see many options here if you insist on using Niblack. You can change the size and type of the filter, and the threshold factor k.
BTW, it seems that your original image has colors. This information can significantly improve black text detection.
There is a range of methods that can help in this situation:
Of course, you can change the algorithm itself =)
It is also possible to just apply morphological filters: first apply a maximum in the window, and afterwards a minimum (see the sketch after this list). You should tune the window size to achieve a better result; see the wiki.
You can choose the hardest but best way and try to improve Niblack's scheme: increase Niblack's window size when the standard deviation is smaller than some fixed number (which should be tuned).
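A minimal sketch of the max-then-min idea from the list above (a grayscale closing); the window size w is a placeholder and has to be tuned to the stroke width:

w  = 15;                       % placeholder window size, must be tuned
se = strel('square', w);
tmp = imdilate(I, se);         % local maximum in the window
out = imerode(tmp, se);        % followed by local minimum (grayscale closing)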
I tried the Niblack algorithm with k = -0.99 and window = 990, using the optimization from
Shafait – "Efficient Implementation of Local Adaptive Thresholding Techniques Using Integral Images", 2008,
with T = mean + k * standardDeviation; I got this result:
The implementation of the algorithm is taken from here.
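In case that link goes stale, here is my own minimal sketch of the integral-image trick from Shafait et al. for getting the local mean and standard deviation in constant time per pixel (not the referenced implementation; w is an odd window size and k is Niblack's constant):

I  = double(X);
w  = 15;  r = (w - 1) / 2;  k = -0.2;                         % placeholder parameters
P  = padarray(I, [r r], 'symmetric');                         % pad so every pixel has a full window
S  = cumsum(cumsum(padarray(P,    [1 1], 0, 'pre'), 1), 2);   % integral image
S2 = cumsum(cumsum(padarray(P.^2, [1 1], 0, 'pre'), 1), 2);   % integral image of squares
[h, wd] = size(I);
sum1 = S(w+1:end, w+1:end)  - S(1:h, w+1:end)  - S(w+1:end, 1:wd)  + S(1:h, 1:wd);
sum2 = S2(w+1:end, w+1:end) - S2(1:h, w+1:end) - S2(w+1:end, 1:wd) + S2(1:h, 1:wd);
local_mean = sum1 / w^2;                                      % window sums -> local statistics
local_std  = sqrt(max(sum2 / w^2 - local_mean.^2, 0));
T = local_mean + k * local_std;                               % Niblack threshold
binarized = X >= T;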
I want to know how this function (from MATLAB) resizes the columns of an input image using previously computed weights and indices.
Which equations does it use to do that?
resizeColumnsCore(double(in), weights', indices');
When I looked for a function called resizeColumnsCore in MATLAB 7.11.0 (R2010b) I didn't find anything. However, I did find a MEX-file by that name in MATLAB 7.8.0 (R2009a) in this subdirectory of the Image Processing Toolbox:
C:\Program Files\MATLAB\R2009a\toolbox\images\images\private\
I guess they've phased it out or replaced it with another function in newer MATLAB versions. Now, if you want to know what the MEX-file does, you need to look at the source code it is compiled from. Luckily, it appears that this source code resizeColumnsCore.cpp can be found in the following directory:
C:\Program Files\MATLAB\R2009a\toolbox\images\images\private\src\misc\
And you can look through that code to determine the algorithms used to resize the columns of an image given a set of weights and indices.
Now, if you want to know how these input arguments to resizeColumnsCore are computed, you'll have to look at the code of a function that calls it. I know of at least one function in the IPT that calls this function: IMRESIZE. If you type edit imresize at the command prompt it will open that function in the Editor, allowing you to look through the code so you can see how the arguments to resizeColumnsCore are created.
What I can tell you for R2009a is that there is a subfunction in the file imresize.m called contributions which computes the weights and indices that are ultimately passed as arguments to resizeColumnsCore. That is where you will want to start looking to determine what algorithms are used to compute these arguments.
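To give an idea of what that MEX routine does with those arguments, here is a rough MATLAB sketch of resizing the columns of an image with precomputed weights and indices (my own reconstruction based on how the contributions subfunction sets things up, not the actual C++ source; the exact orientation of weights and indices may differ, which is why imresize passes them transposed):

% For output row i: out(i, :) = sum over k of weights(i, k) * in(indices(i, k), :)
out = zeros(size(weights, 1), size(in, 2));
for i = 1:size(weights, 1)
    out(i, :) = weights(i, :) * in(indices(i, :), :);   % weighted sum of a few input rows
end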
Looks like this isn't a documented MATLAB function. Could we see some code or a link to the code?