I want to take the FFT of the following forms:
(equation image: Fourier transform in spherical coordinates)
So in fact my f(theta,phi) comes from a 3D image volume, i.e. it is really f(radius,theta,phi). If I had to take the integral along dx, that would amount to just selecting currentvoxel = IMG(ylocation,:,zlocation) and running fft(currentvoxel) on it. Now, however, I have to take the FFT along phi only, which is hard because for each theta=currenttheta and each r=currentr I have to find all the phi locations to take the FFT over.
I don't even know how to unroll the phi values to take the FFT. In Cartesian coordinates it's easy: just go from x=0 to x=end and run fft. Is the phi case the same? Do I get all the phi's at a given r and theta, sort them from 0 to 2*pi, and take the FFT?
Or maybe sine/cosine transforms like this:
(equation image: cosine/sine transform)
If this were f(x,y), I think the way to do this in MATLAB would be to take dct(f.*(x.^2+y.^2)), in other words run a discrete cosine transform on f weighted by (x.^2+y.^2). Am I right?
Extra clarification on second part:
The integral I'm interested in is: (equation image: an integral involving the Bessel function j_l(kr))
where j_l(kr) is: (equation image: expansion of j_l)
and: (equation image: expanded further)
and finally F_l(t) is given by the previous cosine/sine transform (equation image).
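For what it's worth, here is a rough sketch of the phi-direction FFT asked about above. Everything in it is an assumption, not from the question: the volume IMG is indexed IMG(y,x,z) on a unit-spaced Cartesian grid with centre c, and one fixed (r, theta) ring is resampled at uniform phi before taking the FFT:
r     = 10;  theta = pi/4;                    % one fixed (r, theta)
Nphi  = 64;  phi = (0:Nphi-1) * 2*pi/Nphi;    % uniform phi samples over [0, 2*pi)
x     = c(1) + r*sin(theta)*cos(phi);
y     = c(2) + r*sin(theta)*sin(phi);
z     = c(3) + r*cos(theta)*ones(size(phi));
ring  = interp3(IMG, x, y, z, 'linear');      % voxel values resampled on that ring
F_m   = fft(ring);                            % FFT over phi gives the azimuthal orders m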
For a university project I am working with several "Quality Assessment" metrics on finger-vein images.
Now I am trying to implement a metric that uses the Radon transform, and I got stuck at some point doing this in Matlab.
My problem is as follows:
I got the following formula for the Radon transform:
R(rho, theta) = ∫∫ F(x,y) * delta(rho - x*cos(theta) - y*sin(theta)) dx dy
In the first steps I used the built-in one in Matlab, but to implement the rest of the metric I need the derivatives of this expression for the curvature of the curve.
The delta is the Dirac delta function.
Derivation: (equation image)
So my intention is to calculate the Radon transform on my own with this formula, but my problem is that F(x,y) is the gray value of the pixel located at (x,y). So I need a function F(x,y) that gives me the gray value of the pixel, which I can then use to calculate the derivatives and the double integral.
How can I get such a function? Or do I have to do some kind of curve fitting on my pixel values to get such a function?
Thanks in advance.
As I understand your question, there are two things that you could do:
Compute the derivatives of the Radon transform numerically (as suggested by Ander Biguri in a comment above); a rough sketch of this is given after these two options. If you compute the Radon transform carefully, it will be a band-limited function, making the computation of derivatives possible. See this paper for some ideas on how to enforce a band-limited transform:
"The generalized Radon transform: sampling, accuracy and memory considerations" (PDF).
Compute the derivatives of the image numerically, then sample those derivatives to compute your C function. That is, you compute dF/dx, dF/dy, d^2F/dx^2, and whichever other derivatives you need, as images. You can interpolate into these derivative images if you need more precision.
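A rough numerical sketch of the first option, using MATLAB's built-in radon together with gradient (the image F, the angle grid, and the spacing are assumptions; theta is in degrees, so the theta-derivative is per degree):
theta = 0:0.5:179.5;                           % projection angles in degrees
[R, xp] = radon(F, theta);                     % sinogram: rows are offsets xp, columns are angles
[dR_dtheta, dR_dxp] = gradient(R, theta, xp);  % numerical partial derivatives of the sinogram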
IMO the best way to compute derivatives of a discrete image is through Gaussian derivatives. Note that this applies to both solutions above. For example dF/dx (Fx) can be computed by (see here for more details):
sigma = 2;  cutoff = ceil(3*sigma);                % filter scale and half-width (example values)
h  = fspecial('gaussian', [1, 2*cutoff+1], sigma); % 1-D Gaussian kernel
dh = h .* (-cutoff:cutoff) / (-sigma^2);           % its first derivative
Fx = conv2(dh, h, F, 'same');                      % differentiate along one axis, smooth along the other
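A possible follow-up, not part of the original answer (interp2 and the query points xq, yq are assumptions): compute the other derivative the same way and sample both along your curve:
Fy   = conv2(h, dh, F, 'same');          % swap the two kernels to differentiate along the other axis
Fx_q = interp2(Fx, xq, yq, 'linear');    % dF/dx sampled at the query points (xq, yq)
Fy_q = interp2(Fy, xq, yq, 'linear');    % dF/dy sampled at the same points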
PS: sorry for all the self-references, but I have worked on these topics quite a bit in the past. :)
I have a bunch of data that I'd like to use to find an unknown parameter in a physical equation.
I'm trying to find a parameter k to characterise the output of a Hall effect sensor as a function of input voltage and of the distance between the sensor and the magnet. I've found this output to be inversely proportional to the square of the distance.
I asked my professor how to use MATLAB to find the unknown parameter, and he told me I could try to fit it by taking the logarithm of both sides of the equation and plotting that, seeing as that would make the relationship linear and thus easier to fit.
I'd have to do this in MATLAB and I'm assuming the values I measured would have to be converted by hand before being able to perform any sort of curve fitting on them.
I was wondering if doing that was worth it, and if there is a faster way of doing this.
Thanks :)
In order to easily identify the relationship, for a fixed input voltage I took the logarithm of the measured distances and the logarithm of the respective output voltages and plotted those. Fitting a line through those points then showed that the slope was close enough to -2, confirming the inverse-square relationship.
I then did the same for different input voltages and added everything together on the same plot.
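A minimal sketch of that fit, assuming vectors d (distances) and Vout (measured outputs) for one fixed input voltage Vin, and the model Vout = k*Vin./d.^2 (these names and the exact model are assumptions):
p     = polyfit(log(d(:)), log(Vout(:)), 1);   % straight-line fit in log-log space
slope = p(1);                                  % should come out close to -2
k     = exp(p(2)) / Vin;                       % intercept = log(k*Vin) under the assumed model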
While trying to understand Fast Fourier Transform I encountered a problem with the phase. I have broken it down to the simple code below. Calculating one period of a 50Hz sinewave, and applying an fft algorithm:
fs = 1600;                   % sampling frequency in Hz
dt = 1/fs;
L  = 32;                     % number of samples = exactly one 50 Hz period (32/1600 = 0.02 s)
t  = (0:L-1)*dt;
signal = sin(t/0.02*2*pi);   % sine at 1/0.02 s = 50 Hz
Y  = fft(signal);
myAmplitude = abs(Y)/L * 2;  % scale to peak amplitude
myAngle     = angle(Y);
Amplitude_at_50Hz = myAmplitude(2);  % bin 2 = one cycle per window = 50 Hz
Phase_at_50Hz     = myAngle(2);
While the amplitude is ok, I don't understand the phase result. Why do I get -pi/2 ? As there is only one pure sinewave, I expected the phase to be 0. Either my math is wrong, or my use of Matlab, or both of them... (A homemade fft gives me the same result. So I guess I am stumbling over my math.)
There is a similar post here: MATLAB FFT Phase plot. However, the suggested 'unwrap' command doesn't solve my problem.
Thanks and best regards,
DanK
The default waveform for an FFT phase angle of zero is a cosine wave, which starts and ends in the FFT window at 1.0 (not a sine wave, which starts and ends in the FFT window at 0.0, i.e. at its zero crossings). This is because the common nomenclature is to call the cosine components of the FFT basis vectors (the complex exponentials) the "real" components; the sine basis components are called "imaginary" and thus imply a non-zero complex phase.
That is what it should be. If you used cosine, you would have found a phase of zero.
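A quick way to check this in MATLAB, reusing fs, L and t from the question (just a sketch, not part of the original answer):
cosSignal = cos(2*pi*50*t);   % same 50 Hz frequency, but a cosine
Yc = fft(cosSignal);
angle(Yc(2))                  % ~0 up to round-off, whereas the sine gave -pi/2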
Ignoring numerical Fourier transforms for a moment and taking a good old Fourier transform of sin(x), which I am too lazy to walk through, we get a pair of purely imaginary deltas.
As for an intuitive reason, recall that a discrete Fourier transform is averaging a bunch of points along a curve in the complex plane while turning at the angular frequency of the bin you're computing and using the amplitude corresponding to the sample. If you sample a sine curve while turning at its own frequency, the shape you get is a circle centered on the imaginary axis (see below). The average of that is of course going to be right on the imaginary axis.
Plot made with Wolfram Alpha.
The Fourier transform of a sine function such as A*sin(2*pi*f*t), where f is the frequency, yields two impulses of magnitude A/2 in the frequency domain, at +f and -f, whose associated phases are -pi/2 and +pi/2 respectively.
You can take a look at its proof here:
http://mathworld.wolfram.com/FourierTransformSine.html
So the code is working fine.
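For reference, this is just the usual complex-exponential expansion of the sine (a standard identity, not taken from the linked page):

$$A\sin(2\pi f t)=\frac{A}{2}e^{-i\pi/2}\,e^{\,i2\pi f t}+\frac{A}{2}e^{+i\pi/2}\,e^{-i2\pi f t},$$

so the component at +f carries a phase of -pi/2 and the one at -f a phase of +pi/2, which is exactly what angle(Y(2)) reports.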
First I'll describe the physics. The problem is axisymmetric: a sound source is placed at the origin and a sensor is placed on the axis below the source. Given the source waveform, I try to get the sensor's waveform. All material parameters are known, for instance the sound speed and density.
I wrote a Matlab script to calculate it. By solving the sound propagation equation I can get one function, say A(w,k), where w is frequency and k is wavenumber; this is the so-called frequency-wavenumber field. My Matlab code works like this:
discretise w and k to fill an A array; first apply an FFT over k to get the space and frequency information,
then apply an FFT over w to get the space and time information, that is, the waveform at different points.
The pseudocode:
for i_w = 1:Nw
    w = w_list(i_w);
    for i_k = 1:Nk
        k = k_list(i_k);
        M(i_k) = A(w, k);
    end
    wave_space_freq(:, i_w) = ifft(M);   % IFFT over k: space vs. frequency
end   % here one can pick out just the row corresponding to the sensor's position
wave_space_time = ifft(wave_space_freq, [], 2);   % IFFT over w: space vs. time
My question is: do I need to conjugate and flip when I use the IFFT, like ifft([M, 0, fliplr(conj(M))])? I saw some others do that, but I don't understand why.
If you want a strictly real-valued result waveform (not complex with significant imaginary components), then the input to the IFFT has to be conjugate symmetric, for example:
ifft([dc_term, M, 0, fliplr(conj(M))])
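As a concrete sketch (assuming M is the row vector of positive-frequency bins and dc_term its zero-frequency value, with the Nyquist bin set to 0 as in the line above):
spec = [dc_term, M, 0, fliplr(conj(M))];                      % conjugate-symmetric spectrum
x1   = ifft(spec);                                            % real up to round-off
x2   = ifft([dc_term, M, zeros(1, numel(M)+1)], 'symmetric'); % equivalent: let ifft enforce the symmetry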
In MATLAB I need to generate a second derivative of a Gaussian window to apply to a vector representing the height of a curve. I need the second derivative in order to determine the locations of the inflection points and maxima along the curve. The vector representing the curve may be quite noisy, hence the use of the Gaussian window.
What is the best way to generate this window?
Is it best to use the gausswin function to generate the gaussian window then take the second derivative of that?
Or to generate the window manually using the equation for the second derivative of the gaussian?
Or even is it best to apply the gaussian window to the data, then take the second derivative of it all? (I know these last two are mathematically the same, however with the discrete data points I do not know which will be more accurate)
The maximum length of the height vector is going to be around 100-200 elements.
Thanks
Chris
I would create a linear filter composed of the weights generated by the second derivative of a Gaussian function and convolve this with your vector.
The weights of a second derivative of a Gaussian are given by:
G''(t) = C * ( (t - tau)^2 / sigma^4 - 1 / sigma^2 ) * exp( -(t - tau)^2 / (2*sigma^2) )
Where:
tau is the time shift for the filter. If you are generating weights for a discrete filter of length T with an odd number of samples, set tau to zero and let t vary over [-T/2, T/2].
sigma - varies the scale of your operator. Set sigma to a value of around T/6. If you are concerned about the filter being too long, this ratio can be relaxed to around T/4 (i.e. the filter is truncated at roughly two sigma instead of three).
C is the normalising factor. This can be derived algebraically, but in practice I always compute it numerically after calculating the filter weights. For unity gain when smoothing periodic signals, I set C = 1 / sum(G''); see the sketch below.
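A minimal sketch of building such a filter (T and sigma here are example values, 'height' is your curve vector, and the normalisation is left at C = 1 for simplicity):
T     = 31;                              % odd filter length
sigma = T/6;                             % scale, as suggested above
t     = -(T-1)/2 : (T-1)/2;              % tau = 0, t over [-T/2, T/2]
G2    = ((t.^2 - sigma^2)/sigma^4) .* exp(-t.^2/(2*sigma^2));   % G'' with C = 1
d2h   = conv(height, G2, 'same');        % smoothed second derivative of the curve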
In terms of your comment on the equivalence of smoothing first and taking a derivative later, I would say it is more involved than that: which derivative operator would you use in the second step? A simple central difference would not yield the same result.
You can get an equivalent (but approximate) response to a second derivative of a Gaussian by filtering the data with two Gaussians of different scales and then taking the point-wise differences between the two resulting vectors. See Difference of Gaussians for that approach.
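And a sketch of the Difference-of-Gaussians alternative (the scales and the 1.6 ratio are assumptions; the result approximates a scaled second-derivative response):
T   = 31;  t = -(T-1)/2 : (T-1)/2;
s1  = 3;   s2 = 1.6*s1;                            % two nearby scales
g1  = exp(-t.^2/(2*s1^2));  g1 = g1/sum(g1);       % narrower Gaussian, unit area
g2  = exp(-t.^2/(2*s2^2));  g2 = g2/sum(g2);       % wider Gaussian, unit area
dog = conv(height, g2 - g1, 'same');               % approx. a scaled second derivative of 'height'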