I have written the following code to calculate the required transmission power based on the distance between the sender and receiver and the SNR threshold at the receiver. However, I get huge values for the required intensity (Req_I) and required transmission power (Req_Pt). Please suggest a solution if I am making a mistake in the technique for calculating the transmission power, or in the code itself.
Best Regards
Pt=12; %Transmit power in watts
spreading=1.5; %Spreading factor
f=10; %Frequency in kHz
d=0.5; %Distance in km
NL=47.69; %Noise Level in dB
DI=0; %Directivity Index
pi=3.14159265359;
SNRth=17; %SNR threshold in dB
%absorption=10^((0.002+0.11*(f^2/((1+f^2)+0.011*f^2)))/10); %Absorption factor
absorption=10^((0.11*(f^2/(1+f^2))+44*(f^2/(4100+f^2))+2.75*10^(-4)*f^2+0.003)/10);
TL=(d^spreading)*(absorption^d); %Transmission Loss
Req_SL=SNRth+TL+NL+DI; %Required Source Level
Req_I=((10^Req_SL)/10)*(0.67*10^(-18)); %Required Intensity
Req_Pt=Req_I*4*pi; %Required Transmission Power
The calculation of your TL factor is probably wrong; maybe you forgot to take the logarithm of it?
I don't know where your formulas come from, nor your specific application. If you don't have the correct formulas, you can take a look at this pdf, which provides expressions for the TL factor due to attenuation and spreading.
Check your units.
The original paper gives an expression (3) for low frequency propagation, the one you have used, but requires the input to be in kHz, not Hz. Either you should use
f = 10*1e-3; %frequency in kHz
or you should be using formula (2). Also note that the attenuation is in dB/km, so you should convert your distance too, unless you actually are interested in propagating 500 km.
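For what it's worth, here is a minimal sketch of the same calculation with every term kept in dB, which is how the sonar equation is normally applied. Treat the details as assumptions to check against your source: the 10*log10 spreading term (distance in metres, re 1 m), the dB/km absorption term, and the 0.67e-18 W/m^2 reference intensity and 4*pi factor taken from your own code.

f = 10;          % frequency in kHz, as Thorp's formula expects
d = 0.5;         % distance in km
spreading = 1.5; % spreading factor
NL = 47.69;      % noise level in dB
DI = 0;          % directivity index
SNRth = 17;      % SNR threshold in dB
% Thorp absorption coefficient in dB/km
alpha = 0.11*f^2/(1+f^2) + 44*f^2/(4100+f^2) + 2.75e-4*f^2 + 0.003;
% Transmission loss in dB: spreading term (distance in m, re 1 m) + absorption term
TL = spreading*10*log10(d*1000) + alpha*d;
Req_SL = SNRth + TL + NL + DI;        % required source level in dB
Req_I  = 10^(Req_SL/10)*0.67e-18;     % intensity in W/m^2, using the reference from your code
Req_Pt = Req_I*4*pi;                  % power over a 1 m reference sphere

With these numbers the result comes out on the order of microwatts rather than the huge values you were seeing.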
I am trying to measure the PSD of a stochastic process in MATLAB, but I am not sure how to do it. I have posted the exact same question here, but I thought I might have more luck here.
The stochastic process describes wind speed, and is represented by a vector of real numbers. Each entry corresponds to the wind speed in a point in space, measured in m/s. The points are 0.0005 m apart. How do I measure and plot the PSD? Let's call the vector V. My first idea was to use
[p, w] = pwelch(V);
loglog(w,p);
But is this correct? The thing is that I'm given an analytical expression which the PSD should (in theory) match. By plotting it with these two lines of code, it looks all wrong. Specifically, it looks as though it could need a translation and a scaling. Other than that, the shapes of the two are similar.
UPDATE:
The image above actually doesn't depict the PSD obtained by using pwelch on a single vector, but rather the mean of the PSDs of 200 vectors, since these vectors stem from numerical simulations. As suggested, I have tried scaling by 2*pi/0.0005. I saw that you can actually give this information to pwelch. So I tried using the code
[p, w] = pwelch(V,[],[],[],2*pi/0.0005);
loglog(w,p);
instead. As seen below, it looks much nicer. It is, however, still not perfect. Is that just something I should expect? Taking the square root is not the answer, by the way. But thanks for the suggestion. For one thing, it should follow Kolmogorov's -5/3 law, which it does now (it follows the green line, which has slope -5/3). The function I'm trying to match it with is the Shkarofsky spectral density function, which is the one-dimensional Fourier transform of the Shkarofsky correlation function. Is it not possible to mark up math here on the site?
UPDATE 2:
I have tried using [p, w] = pwelch(V,[],[],[],1/0.0005); as was suggested. But as you can see, it still doesn't quite match up. It's hard for me to explain exactly what I'm looking for, but what I would like (or, what I expected) is that the dip of the computed and the analytical PSD occurs in the same place and falls off at the same rate. The data come from simulations of turbulence. The analytical expression has been fitted to actual measurements of turbulence, in which this dip is present as well. I'm no expert at all, but as far as I know the dip happens at the small length scales, since the energy is dissipated due to the viscosity of the air.
What about using the standard equation for obtaining a PSD? I would do it this way:
Sxx = fft(x).*conj(fft(x))*dt^2; % periodogram-style estimate; x is the data vector, dt the sample spacing
or
Sxx = fftshift(abs(fft(x)).^2)*dt^2; % the same estimate with zero frequency shifted to the centre
Then, if you really need to, you may think of applying a window, such as
Hanning
Hamming
Welch
which will mainly smooth your PSD estimate; a short sketch of a windowed version follows below.
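As an illustration, here is a minimal sketch of a windowed periodogram using the same dt^2 scaling as above; the Hann window is written out by hand so no toolbox is needed, and x and dt are placeholders for your data vector and sample spacing:

N   = length(x);
w   = 0.5*(1 - cos(2*pi*(0:N-1)'/(N-1)));   % Hann window as a column vector
xw  = x(:).*w;                              % apply the window to the data
U   = sum(w.^2)/N;                          % compensate for the power removed by the window
Sxx = abs(fft(xw)).^2*dt^2/U;               % windowed periodogram estimate

Scaling conventions for PSDs vary, so check the normalisation against whatever reference you are comparing to.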
Presumably you need to rescale the frequency (wavenumber) to units of 1/m.
The frequency units from pwelch should be rescaled, since as the documentation explains:
W is the vector of normalized frequencies at which the PSD is estimated. W has units of rad/sample.
Off the cuff my guess is that the scaling factor is
scale = 1/0.0005/(2*pi);
or 318.3 (m^-1).
As for the intensity, it looks like taking a square root might help. Perhaps your equation reports an intensity, not PSD?
Edit
As you point out, since the analytical and computed PSD have nearly identical slopes they appear to obey similar power laws up to 800 m^-1. I am not sure to what degree you require exponents or offsets to match to be satisfied with a specific model, and I am not familiar with this particular theory.
As for the apparent inconsistency at high wavenumbers, I would point out that you are entering the domain of very small numbers and therefore (1) floating point issues and (2) noise are probably lurking. The very nice looking dip in the computed PSD on the other hand appears very real but I have no explanation for it (maybe your noise is not white?).
You may want to look at this submission at matlab central as it may be useful.
Edit #2
After inspecting documentation of pwelch, it appears that you should pass 1/0.0005 (the sampling rate) and not 2*pi/0.0005. This should not affect the slope but will affect the intercept.
The dip in PSD in your simulation results looks similar to aliasing artifacts that I have seen in my data when the original data were interpolated with a low-order method. To make this clearer - say my original data was spaced at 0.002 m, but in the course of cleaning up missing data, trying to save space, whatever, I linearly interpolated those data onto a 0.005 m spacing. The frequency response of linear interpolation is not well-behaved, and will introduce peaks and valleys at the high wavenumber end of your spectrum.
There are different conventions for spectral estimates - whether the wavenumber units are 1/m or radians/m, single-sided spectra or double-sided spectra.
help pwelch
shows that the default settings return a one-sided spectrum, i.e. the bin for some frequency ω will include the power density for both +ω and -ω.
You should double check that the idealized spectrum to which you are comparing is also a one-sided spectrum. Otherwise, you'll need to halve the values of your one-sided spectrum to get values representative of the +ω side of a two-sided spectrum.
I agree with Try Hard that it is the cyclic frequency (generally Hz, or in this case 1/m) which should be specified to pwelch. That said, the returned frequency vector from pwelch is also in those units. Analytical spectral formulae are usually written in terms of angular frequency, so you'll want to be sure that you evaluate it in terms of radians/m, but scale back to 1/m for plotting.
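To make that unit bookkeeping concrete, here is a sketch under stated assumptions: dx = 0.0005 m is your sample spacing, V your data vector, and S_analytic a placeholder for whatever analytical formula you are using, written per unit angular wavenumber (rad/m):

dx = 0.0005;                        % sample spacing in metres
Fs = 1/dx;                          % cyclic sampling rate in 1/m

[p, k] = pwelch(V, [], [], [], Fs); % k is returned in 1/m (one-sided)

kappa = 2*pi*k;                     % angular wavenumber in rad/m for the analytical formula
S = 2*pi*S_analytic(kappa);         % Jacobian: per-(rad/m) density rescaled to per-(1/m)
% Depending on convention, a one-sided analytical spectrum may also need a factor of 2.

loglog(k, p, k, S)
legend('pwelch estimate', 'analytical')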
I have 2 arrays of 800000 input and output data samples of a system. The system is a kind of oven that operates between 0 and 10 volts. The sample time is 0.001 s.
I have to identify the model of this system, but first of all, given that the data are clearly dirty, I would like to filter the noise.
How can I do it with the System Identification Toolbox of Matlab?
Moreover, how can I estimate the cutoff frequency to remove the noise?
Thank you in advance.
PS: given that this is a bit off topic, please post your answer here. Thank you.
The cutoff frequency is directly given by your sampling time or sampling frequency.
Your sampling frequency is 1/(sampling time) and must be at least a factor of 2 above the highest frequency of interest:
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem
f_s = 1/T_s >= 2*f_cutOff
You can then simply do the same frequency-domain processing, provided your sampling frequency really is high enough. The easiest way would be to have a look at the frequency domain (with the function fft()), check first where you have strong noise components, filter out these components (by zeroing them), and then transform back into the time domain (with the function ifft()).
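A minimal sketch of that procedure, assuming y is one of your 800000-sample records, Ts = 0.001 s is the sample time, and f_keep is a cutoff you choose after looking at the spectrum (these names are placeholders, not toolbox functions):

Ts = 0.001;                                      % sample time in seconds
N  = length(y);
f  = (0:N-1)/(N*Ts);                             % frequency axis of the FFT bins, in Hz

Y = fft(y);
plot(f(1:floor(N/2)), abs(Y(1:floor(N/2))))      % inspect where the noise sits

f_keep = 5;                                      % example cutoff in Hz, read off the plot
mask = (f <= f_keep) | (f >= 1/Ts - f_keep);     % keep low frequencies and their mirror images
Y(~mask) = 0;                                    % zero the noisy bins
y_filt = real(ifft(Y));                          % back to the time domain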
Noise is modeled as a white Gaussian distribution in the simplest case. If you estimate the noise energy, you can make a dummy noise by calling
noise = A*randn(1,N);
Here, A is the amplitude and N is the sample count. Then just take the fft of this signal, subtract it from the fft of the input signal, and take the inverse fft (ifft).
When taking fft(signal, nfft) of a signal, how does nfft change the outcome and why? Can I have a fixed value for nfft, say 2^18, or do I need to go 2^nextpow2(2*length(signal)-1)?
I am computing the power spectral density (PSD) of two signals by taking the FFT of the autocorrelation, and I want to compare the results. Since the signals are of different lengths, I am worried that if I don't fix nfft, it would make the comparison really hard!
There is no inherent reason to use a power-of-two (it just might make the processing more efficient in some circumstances).
However, to make the FFTs of two different signals "commensurate", you will indeed need to zero-pad one or other (or both) signals to the same lengths before taking their FFTs.
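For example, a minimal sketch under the assumption that x1 and x2 are your two autocorrelation sequences of different lengths:

nfft = 2^nextpow2(max(length(x1), length(x2)));  % common length; the power of 2 is only for speed
X1 = fft(x1, nfft);   % fft(x, n) zero-pads x up to n samples before transforming
X2 = fft(x2, nfft);
% Both spectra now share the same frequency bins and can be compared directly.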
However, I feel obliged to say: If you need to ask this, then you're probably not at a point on the DSP learning curve where you're going to be able to do anything useful with the results. You should get yourself a decent book on DSP theory, e.g. this.
Most modern FFT implementations (including MATLAB's which is based on FFTW) now rarely require padding a signal's time series to a length equal to a power of two. However, nearly all implementations will offer better, and sometimes much much better, performance for FFT's of data vectors w/ a power of 2 length. For MATLAB specifically, padding to a power of 2 or to a length with many low prime factors will give you the best performance (N = 1000 = 2^3 * 5^3 would be excellent, N = 997 would be a terrible choice).
Zero-padding will not increase the frequency resolution of your PSD, but it does reduce the bin size in the frequency domain. So if you add NZeros to a signal vector of length N, the one-sided PSD will now have ( N + NZeros )/2 + 1 bins. This means that each bin of frequencies will now have a width of:
Bin width (Hz) = F_s / ( N + NZeros )
Where F_s is the signal sample frequency.
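As a quick worked example of that formula (the numbers are made up purely for illustration):

Fs     = 1000;                 % sample frequency in Hz
N      = 500;                  % original length: bin width = 1000/500 = 2 Hz
NZeros = 1500;                 % pad up to 2000 samples total
binWidth = Fs/(N + NZeros)     % = 0.5 Hz: finer bins, but no new information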
If you find that you need to separate or identify two closely spaced peaks in the frequency domain, you need to increase your sample time. You'll quickly discover that zero-padding buys you nothing to that end - and intuitively that's what we'd expect. How can we expect more information in our power spectrum without adding more information (a longer time series) to our input?
Best,
Paul
Is there some process that can determine / remove an unknown DC offset from a non-periodic discrete time signal?
The signal in question has a sample rate of 25 Hz and has harmonics of interest between 0.25 and 3 Hz.
I have tried using highpass filters with mixed results. First I used a 10th order Gaussian with Fc = 0 Hz; this did a good job of removing the offset, but it severely attenuated the AC as well, although it did leave the signal shape intact. Next I used a 168th order equiripple filter with a stopband at 0 Hz and a passband at 0.25 Hz; the phase shift was too severe and the signal shape too distorted. The distortion could probably be reduced if the passband were brought down to 0.1 Hz, but this would just further increase the phase shift, which I need to keep to the very minimum.
Before and after applying x - LPF(x), as suggested by Paul R
I recommend using a notch filter at DC and using filtfilt to make it zero phase.
a = [1 , -0.98]; b = [1,-1];
y = filtfilt(b,a,x);
The closer the second value of a gets to -1 the narrower your notch will be.
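If you want to see how that value shapes the notch, a quick sketch comparing two pole radii (freqz, like filtfilt, is a Signal Processing Toolbox function):

b = [1, -1];                            % zero exactly at DC
[h1, w] = freqz(b, [1, -0.98], 1024);   % wider notch
[h2, ~] = freqz(b, [1, -0.999], 1024);  % much narrower notch
plot(w/pi, abs(h1), w/pi, abs(h2))
xlabel('Normalized frequency (\times\pi rad/sample)')
legend('a(2) = -0.98', 'a(2) = -0.999')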
A DC offset means that some constant value was added to the signal (the name originates from adding a DC voltage to an analog AC signal). If the DC component is really constant (and not changing really slowly), then you don't have to design some high-order (and potentially unstable) high-pass filter - you can just subtract the average of your signal from the signal - which is, of course, a high-pass filter as well (averaging is a type of low-pass, and 'one minus the average' is high-pass) - but a very simple one.
If, on the other hand, you have a reason to believe that the DC component is not really DC, but rather an AC with a very low frequency, then you'd better average segments of your signal rather than the signal as a whole, which is the same as using a low-pass filter with an impulse response that is shorter than the length of the signal. In this case you have to make some assumptions about the "DC" component.
Rather than implementing a high pass filter directly (which can be rather tricky for very low frequencies - you end up with a large number of coefficients and various issues with stability and passband ripple etc), you might instead want to consider implementing a low pass filter which will give you an estimate of the DC offset value, and then subtract this filtered offset from your signal, i.e. rather than:
y = HPF(x)
do this:
y = x - LPF(x)
The low pass filter can probably just be quite a simple filter with a relatively small number of terms. The big advantage of this implementation is that your higher frequency components should not have any unwanted artefacts due to phase, ripple, etc, since all you are doing is subtracting an almost stationary DC value from the samples.
The only potential downside is that if the DC offset is large you may have quite a long initial settling time before the estimate of the DC offset is accurate (although this is also true of any other implementation such as a direct high pass filter of course). If you have any a priori knowledge of what the offset value is likely to be (e.g. if it doesn't change very much from run to run, and you know the value from the previous run) then you can use this to optimise the settling time, by initialising the LPF state variables to a suitable value rather than 0.
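A minimal sketch of that approach using a plain moving average as the low-pass stage (the window length is an assumption; it is chosen as an integer number of 0.25 Hz periods so the offset estimate has a null at your lowest harmonic of interest):

Fs = 25;                          % sample rate in Hz
M  = 20*Fs;                       % 20 s window = 5 periods of 0.25 Hz
lp = filter(ones(1,M)/M, 1, x);   % running estimate of the (near-)DC offset
y  = x - lp;                      % subtract the estimated offset

The first M samples are the settling period mentioned above; initialising the filter state with a rough prior estimate of the offset shortens it.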
As others have said, to remove a DC offset, you can simply subtract the mean. Your signal does not need to be periodic, but it does need to be long enough to get a good estimate of the DC component.
If you still wish to go with a filtering approach, you can eliminate the severe distortion due to phase lag by using filtfilt. This function filters your timeseries once in the forwards direction and then once in the reverse direction, so that phase distortions cancel out.
You can design a symmetric FIR filter as the low-pass filter that estimates the DC and then subtract the output from your input signal. This filter has constant group-delay.
We're trying to analyse flow around a circular cylinder and we have a set of Cp values that we got from a wind tunnel experiment. Initially, we started off with a sample frequency of 20 Hz and tried to find the frequency of vortex shedding using FFT in MATLAB. We got a frequency of around 7 Hz. Next, we did the same experiment, but the only thing we changed was the sampling frequency - from 20 Hz to 200 Hz. We got the frequency of the vortex shedding to be around 70 Hz (this is where the peak is located in the graph). The graph doesn't change regardless of the Cp data that we enter. The only time the peak differs is when we change the sample frequency. It seems like the increase in the frequency of vortex shedding is proportional to the sample frequency, and this doesn't seem to make sense at all. Any help regarding establishing a relation between sample frequency and vortex shedding frequency would be greatly appreciated.
The problem you are seeing is related to "data aliasing": the FFT cannot detect frequencies higher than the Nyquist frequency (half the sampling frequency).
With data aliasing, a peak at the real frequency will show up at (real frequency modulo Nyquist frequency). In your 20 Hz sampling (assuming 70 Hz is the true frequency), that results in zero frequency, which means you're not seeing the real information. One thing that can help you with this is to use FFT "windowing".
Another problem that you may be experiencing is related to noisy data generation via single-FFT measurement. It's better to take lots of data, use windowing with overlap, and make sure you have at least 5 FFTs which you average to find your result. As Steven Lowe mentioned, you should also sample at faster rates if possible. I would recommend sampling at the fastest rate your instruments can sample.
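If you want to convince yourself of the folding effect, here is a small sketch; the 70 Hz tone and the two sampling rates are chosen purely for illustration, not taken from your experiment:

f0 = 70;                                   % test tone in Hz, for illustration only
for Fs = [200, 50]                         % well-sampled vs. under-sampled
    t = 0:1/Fs:20;                         % 20 s record
    x = cos(2*pi*f0*t);
    N = numel(x);
    X = abs(fft(x))/N;
    f = (0:N-1)*Fs/N;
    [~, idx] = max(X(2:floor(N/2)));       % strongest non-DC bin in the one-sided half
    fprintf('Fs = %3d Hz -> apparent peak at %.1f Hz\n', Fs, f(idx + 1))
end

At 200 Hz the peak shows up at 70 Hz; at 50 Hz the same tone folds down to 20 Hz.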
Lastly, I would recommend that you read some excerpts from Numerical Recipes in C (<-- link):
Section 12.0 -- Introduction to FFT
Section 12.1 (Discusses data aliasing)
Section 13.4 (Discusses FFT windowing)
You don't need to read the C source code -- just the explanations. Numerical Recipes for C has excellent condensed information on the subject.
If you have any more questions, leave them in the comments. I'll try to do my best in answering them.
Good luck!
This is probably not a programming problem; it sounds like an experiment/measurement problem.
I think the sampling frequency has to be at least twice the oscillation frequency, otherwise you get artifacts; this might explain the difference. Note that the ratio of the FFT frequency to the sampling frequency is 0.35 in both cases. Can you repeat the experiment with higher sampling rates? I'm thinking that if this is a narrow cylinder in a strong wind, it may be vibrating/oscillating faster than the sampling rate can detect.
I hope this helps - there's a 97.6% probability that I don't know what I'm talking about ;-)
If it's not an aliasing problem, it sounds like you could be plotting the frequency response on a normalised frequency scale, which will change with sample frequency. Here's an example of a reasonably good way to plot a frequency response of a signal in Matlab:
Fs = 100;
Tmax = 10;
time = 0:1/Fs:Tmax;
omega = 2*pi*10; % 10 Hz
signal = 10*sin(omega*time) + rand(1,Tmax*Fs+1);
Nfft = 2^8;
[Pxx,freq] = pwelch(signal,Nfft,[],[],Fs);
plot(freq,Pxx)
Note that the sample frequency must be explicitly passed to the pwelch command in order to output the “real” frequency data. Otherwise, when you change the sample frequency the bin where the resonance occurs will seem to shift, which is similar to the problem you describe.
Methinks you need to do some serious reading on digital signal processing before you can even begin to understand all the nuances of the DFT (FFT). If I were you, I'd get grounded in it first with this great book:
Discrete-Time Signal Processing
If you want more of a mathematical treatment that will really expand your abilities,
Fourier Analysis by Körner
Take a look at this related question. While it was originally asked about VB, the responses are generically about FFTs.
I tried using the frequency response code above, but it seems that I don't have the appropriate toolbox in MATLAB. Is there any way to do the same thing without using the fft command? So far, this is what I have:
% FFT Algorithm
Fs = 200; % Sampling frequency
T = 1/Fs; % Sample time
L = 65536; % Length of signal
t = (0:L-1)*T; % Time vector
y = data1; % Your CP values go in this vector
NFFT = 2^nextpow2(L); % Next power of 2 from length of y
Y = fft(y,NFFT)/L;
f = Fs/2*linspace(0,1,NFFT/2);
% Plot single-sided amplitude spectrum.
loglog(f,2*abs(Y(1:NFFT/2)))
title(' y(t)')
xlabel('Frequency (Hz)')
ylabel('|Y(f)|')
I think there might be something wrong with the code I am using. I'm not sure what though.
A colleague of mine has written some nice GPL-licenced functions for spectral analysis:
http://www.mecheng.adelaide.edu.au/~pvl/octave/
(Update: this code is now part of one of the Octave modules:
http://octave.svn.sourceforge.net/viewvc/octave/trunk/octave-forge/main/signal/inst/.
But it might be tricky to extract just the pieces you need from there.)
They're written for both Matlab and Octave and serve mostly as a drop-in replacement for the analogous functions in the Signal Processing Toolbox. (So the code above should still work fine.)
It may help with your data analysis; better than rolling your own with fft and the like.