Has anyone else noticed that the outputs of MATLAB's rgb2hsv() and OpenCV's cvtColor() (with the CV_BGR2HSV flag) appear to be calculated slightly differently?
For one, MATLAB's function maps any image input to the [0,1] interval, while OpenCV keeps the interval of the input (i.e. an image with pixels in [0,255] in RGB keeps the same [0,255] interval in HSV).
But more importantly, when normalizing the cvtColor() output (e.g. mat = mat / 255), the values are not quite the same.
I couldn't find anything in either documentation about the specific formulas they use. Can anyone shed some light on these differences?
For OpenCV, the formula is right there in the documentation you point to. For MATLAB, have a look here: http://www.mathworks.com/matlabcentral/newsreader/view_thread/269237:
Just dive into the code - they gave it to you. Just put the cursor on the function rgb2hsv() in your code and type Ctrl-D.
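The practical consequence is that a single division by 255 does not map OpenCV's 8-bit HSV output onto MATLAB's convention: for CV_8U input, OpenCV stores H in [0,180] (degrees divided by two) while S and V are in [0,255], whereas rgb2hsv returns all three channels in [0,1]. A minimal MATLAB sketch of the rescaling is below; the variable hsvCV (the OpenCV result imported into MATLAB) and the test image are assumptions for illustration.
rgb   = imread('peppers.png');     % any 8-bit RGB test image
hsvML = rgb2hsv(rgb);              % MATLAB: H, S and V all end up in [0,1]
% hsvCV stands for the matrix produced by OpenCV's cvtColor with CV_BGR2HSV,
% imported into MATLAB; it is an assumption here, not something this snippet computes.
% hsvNorm = cat(3, double(hsvCV(:,:,1))/180, double(hsvCV(:,:,2))/255, double(hsvCV(:,:,3))/255);
% max(abs(hsvNorm(:) - hsvML(:)))  % remaining differences come from rounding to uint8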
I am very new to MATLAB and I am trying to get to grips with integrating under a curve.
I wanted to see the difference between using trapz(y) and trapz(x,y) to find the area under a Gaussian curve. What I cannot seem to understand is why I am getting two different area values, and I am trying to figure out which one is more accurate.
% Read the measured x values from the spreadsheet
dataset = xlsread('Lab 3 Results 11.10.18 (1).xlsx','Sheet3','C6:D515');
x = dataset(:,1);
% Gaussian parameters
a1 = 38.38;
b1 = 1179;
c1 = 36.85;
d1 = 6.3;
% Gaussian with a constant offset
y = a1*exp(-((x-b1)/c1).^2) - d1;
% Integrate using the actual x spacing
int1 = trapz(x,y)
% Integrate assuming unit spacing between samples
int2 = trapz(y)
So when I run this code I get int1 = 1738.3 and int2 = 5.78.4, but when I integrated this function by hand using the trapezium rule my answer came out nearer to int1 than to int2. Could anyone shed some light on this? I just cannot visualise how MATLAB is applying the trapezium rule in two different ways.
Neither implementation is more accurate in itself, but trapz(y) assumes unit spacing between data points (i.e., the spacing is uniformly 1). See the trapz documentation.
Since you know the x-coordinates, use trapz(x,y).
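For uniformly spaced samples the two calls differ only by the spacing factor, which a short sketch makes visible. The x range below is an assumption chosen to match the 510 rows read from the spreadsheet; only the scaling relationship matters.
x = linspace(1100, 1260, 510).';              % assumed x range, for illustration only
y = 38.38*exp(-((x - 1179)/36.85).^2) - 6.3;  % the question's Gaussian
trapz(x, y)                                   % uses the real spacing diff(x)
trapz(y)                                      % pretends the spacing is 1
trapz(y) * mean(diff(x))                      % rescaling by the spacing recovers trapz(x,y)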
I'm using MATLAB for data fitting for the first time and I cannot get it to fit my data properly. I have a few hundred measured values which I normalise to [0,1] (see the linked image).
I then want to fit the data with a modified erf, namely 0.5 + 0.5*erf(x/(2*(t*d)^(1/2))). I want to estimate the value of t, so I even tried assigning a value to d (which is a known constant anyway) and substituting the initial 0.5 with a constant of unknown value to be found: a + 0.5*erf(x/(2*(t*6E-20)^(1/2))). I also tried using erfc instead of erf.
However, I always get a very steep fitted curve which does not match my data.
I know that, given a fixed d, I should get a value for t of around 3-7 from MATLAB, as I can fit the data well in Excel (in a qualitative way, i.e. by eye) with the given function and a value of t in that range.
I should probably mention that the fit in Excel is done by finding the inflection point and using a slightly different equation to model the data above and below it. I also tried this method in MATLAB but still could not make it work.
For some reason the fit from MATLAB always returns the same value for t as the supplied start point, and I always get the same steep curve no matter what method I use. My bounds for t are +/-Inf.
What am I doing wrong?
Thanks!
Giuseppe
Real data and fitted curve image
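For reference, here is a minimal sketch of how this model can be set up with the Curve Fitting Toolbox, not the poster's actual script. The variable names xdata and ynorm, the fixed value of d, the start point, and the bounds are assumptions for illustration.
% Modified-erf model with d passed in as a fixed "problem" parameter
ft = fittype('a + 0.5*erf(x/(2*sqrt(t*d)))', ...
             'independent', 'x', 'problem', 'd', 'coefficients', {'a','t'});
% xdata and ynorm are the measured positions and the values normalised to [0,1] (assumed names)
[fitted, gof] = fit(xdata(:), ynorm(:), ft, ...
                    'problem', {6e-20}, ...      % assumed known constant d
                    'StartPoint', [0.5, 5], ...  % [a, t], with t started in the expected 3-7 range
                    'Lower', [0, eps]);          % keep t strictly positive so sqrt(t*d) stays real
plot(fitted, xdata, ynorm)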
Due to the nature of my problem, I want to evaluate the numerical implementations of the Radon transform in Matlab (i.e. different interpolation methods give different numerical values).
While trying to code my own Radon transform and compare it to MATLAB's output, I found out that my Radon projection sizes are different from MATLAB's.
So here is a bit of intuition on how I compute the number of Radon samples needed. Let's do the 2D case.
The idea is that the maximum size occurs when the diagonal (in a rectangular image at least) is projected in the Radon transform, so diago = sqrt(size(I,1)^2 + size(I,2)^2). As we don't want anything left out, n_r = ceil(diago). n_r should be the number of discrete samples of the Radon transform needed to ensure no data is left out.
I noticed that MATLAB's radon output size is always odd, which makes sense as you would always want a "ray" through the rotation center. And I noticed that there are 2 zeros at the endpoints of the array in all cases.
So in that case, n_r=ceil(diago)+mod(ceil(diago)+1,2)+2;
However, it seems that I get small discrepancies with Matlab.
An MWE:
% Try: 255, 256
pixels = 256;
I = phantom('Modified Shepp-Logan', pixels);
rd = radon(I, 45);          % theta is given in degrees
size(rd,1)
s = size(I);
diagsize = sqrt(sum(s.^2));
n_r = ceil(diagsize) + mod(ceil(diagsize)+1, 2) + 2
rd=
367
n_r =
365
As MATLAB's Radon transform is a function I cannot look into, I wonder where this discrepancy could come from.
I took another look at the problem and I believe this is actually the right answer. From the "hidden documentation" of radon.m (type in edit radon.m and scroll to the bottom)
Grandfathered syntax
R = RADON(I,THETA,N) returns a Radon transform with the
projection computed at N points. R has N rows. If you do not
specify N, the number of points the projection is computed at
is:
2*ceil(norm(size(I)-floor((size(I)-1)/2)-1))+3
This number is sufficient to compute the projection at unit
intervals, even along the diagonal.
I did not try to rederive this formula, but I think this is what you're looking for.
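As a quick sanity check (a minimal sketch, not part of the original answer), the grandfathered formula can be compared directly against the size radon actually returns for the two image sizes mentioned in the question:
for pixels = [255 256]
    I = phantom('Modified Shepp-Logan', pixels);
    nFormula = 2*ceil(norm(size(I) - floor((size(I)-1)/2) - 1)) + 3;
    nActual  = size(radon(I, 45), 1);   % the output length does not depend on theta
    fprintf('pixels = %d: formula = %d, radon = %d\n', pixels, nFormula, nActual);
end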
This is a fairly specialized question, so I'll offer up an idea without being completely sure it is the answer to your specific question (normally I would pass and let someone else answer, but I'm not sure how many readers of Stack Overflow have studied the Radon transform). I think what you might be overlooking is the floor function in the documentation for the radon call. From the doc:
The radial coordinates returned in xp are the values along the x'-axis, which is
oriented at theta degrees counterclockwise from the x-axis. The origin of both
axes is the center pixel of the image, which is defined as
floor((size(I)+1)/2)
For example, in a 20-by-30 image, the center pixel is (10,15).
This gives different behavior for odd- or even-sized problems that you pass in. Hence, in your example ("Try: 255, 256"), you would need a different case for odd versus even, and this might involve (in effect) padding with a row and column of zeros.
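A tiny illustration of that center-pixel formula for an odd and an even image size (the sizes are taken from the question's "Try: 255, 256"):
floor(([255 255] + 1) / 2)   % -> [128 128]; 128 is the true central pixel of 1..255
floor(([256 256] + 1) / 2)   % -> [128 128]; offset from the geometric center at 128.5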
I have a question regarding the parameters in the edge function.
edge(img,'sobel',threshold);
edge(img,'prewitt',threshold) ;
edge(img,'roberts',threshold);
edge(img,'canny',thresh_canny,sigma);
How should the threshold for the first 3 types be chosen? Is there anything that can help in choosing this threshold (like the histogram, for instance)? I am aware of the function graythresh, but I want to set it manually. So far I know it's a value between 0 and 1, but I don't know how to interpret it.
The same goes for Canny. I'm trying to input an array thresh_canny = [low_limit, high_limit], but I don't know how to pick these values. And how does the sigma value influence the result?
It really depends on the type of edge you want to see in the output. If you want to see only really strong edges, use a threshold towards the higher end of the range (say 0.9-1); this is relative to the highest gradient magnitude of the image.
As far as sigma is concerned, it controls the Gaussian smoothing applied to the input image inside the Canny detector. This is to reduce noise in the input image.
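A minimal sketch of experimenting with these parameters by hand; the test image and the specific threshold values are just assumptions chosen to make the effect visible:
img = im2double(imread('cameraman.tif'));         % any grayscale test image
BW_sobel_low  = edge(img, 'sobel', 0.05);         % lower threshold: more edges, more noise
BW_sobel_high = edge(img, 'sobel', 0.15);         % higher threshold: only the strongest gradients
% Canny takes [low high] hysteresis thresholds plus the Gaussian sigma
BW_canny = edge(img, 'canny', [0.05 0.20], 1.5);  % larger sigma smooths away fine edges
montage({BW_sobel_low, BW_sobel_high, BW_canny});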
My objective is to handle illumination and expression variations in an image. So I tried to implement MATLAB code that works with only the important information within the image, in other words only the "useful" information. To do that, it is necessary to remove all unimportant information from the image.
Reference: this paper
Let's look at my steps:
1) Apply histogram equalization to get histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent.
2) Apply an SVD approximation to histo_equalized_image. Before that, I apply the SVD decomposition ([L D R] = svd(histo_equalized_image)); the singular values are then used to build the derived image J = L*power(D, i)*R', where i varies between 1 and 2.
3) Finally, the derived image is combined with the original image: C = (MyGrayImage + a*J)/(1 + a), where a varies from 0 to 1.
4) But all the steps above are not able to perform well under varying conditions. So finally, a wavelet transform is used to handle those variations (we use only the LL block). The low-frequency component contains the useful information; also, unimportant information gets lost in this component. The LL component is insensitive to illumination changes and expression variations.
I wrote MATLAB code for this, and I would like to know whether my code is correct (and if not, how to correct it). Furthermore, I am interested to know if I can optimize these steps. Can this method be improved, and if so, how? I would appreciate any help.
Here is my MATLAB code:
% Read the RGB image
image = imread('img.jpg');
% Convert it to grayscale
image_gray = rgb2gray(image);
% Convert it to double
image_double = im2double(image_gray);
% Apply histogram equalization
histo_equalized_image = histeq(image_double);
% Apply the SVD decomposition
[U, S, V] = svd(histo_equalized_image);
% Calculate the derived image
P = U * power(S, 5/4) * V';
% Linearly combine both images
J = (single(histo_equalized_image) + (0.25 * P)) / (1 + 0.25);
% Apply the DWT
[c, s] = wavedec2(J, 2, 'haar');
a1 = appcoef2(c, s, 'haar', 1); % I need only the LL block
You need to define what you mean by "useful" or "important" information, and only then design the steps.
Histogram equalization is a global transformation, which gives different results on different images. You can do an experiment: run histeq on an image that benefits from it. Then make two copies of the original image, draw a black square (30% of the image area) on one and a white square on the other, apply histeq to both, and compare the results.
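A minimal sketch of that experiment; the test image and the square placement are assumptions, any image works:
I  = im2double(imread('cameraman.tif'));      % any grayscale test image
sq = round(sqrt(0.30*numel(I)));              % side of a square covering ~30% of the area
I1 = I;  I1(1:sq, 1:sq) = 0;                  % copy with a black square
I2 = I;  I2(1:sq, 1:sq) = 1;                  % copy with a white square
montage({histeq(I), histeq(I1), histeq(I2)})  % the same transformation, three different results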
The low-frequency component contains the useful information; also, unimportant information gets lost in this component.
Really? Edges and shapes - which are (at least for me) quite important - are in the high frequencies. Again, we need a definition of "useful" information.
I cannot see the theoretical background for why and how your approach would work. Could you explain a little why you chose this method?
P.S. I'm not sure whether these papers are relevant to you, but I recommend "Which Edges Matter?" by Bansal et al. and "Multi-Scale Image Contrast Enhancement" by V. Vonikakis and I. Andreadis.