I am trying to find a transfer function from frequency response data using invfreqs in octave.
In principle it works; the problem is that the resulting transfer function always fits the highest frequencies well, while the low frequencies are badly matched.
Trying to weight the fit errors as a function of frequency doesn't help either. Am I doing something wrong?
Hg = 10.^(mg/20).*exp(i*pg*pi/180);  % complex response from magnitude (dB) and phase (deg)
wt = ones(size(fgrps));              % assumed initialization: the snippet uses wt before defining it
wt(fgrps>1500) = 0;                  % ignore everything above 1500
m = 44;                              % numerator order
n = 52;                              % denominator order
[Bg,Ag] = invfreqs(Hg,fgrps,m,n,wt);
This is the result I get:
The result is more or less the same for different orders of the numerator and denominator polynomials. High frequencies are matched well, low frequencies are matched poorly.
What can I do about this?
Thank you very much in advance!
Kind regards
Stefan
My first bet: the frequency axis is usually (and, judging from your plot, also in your case) shown in logarithmic scaling, but the fit works on the raw frequency values. So the function isn't fitted the way you'd "imagine" it from the plot: on a linear frequency grid the high frequencies are represented by far more points, so the fit is better there.
So what you should do is: find out what scaling is applied to your frequency grid and try a different one. Bear in mind that a plain linear grid is also not a "good" idea in itself; rather, try to construct a frequency vector (and weights) that emphasises the range where you need the fit to be close, and fit with that (see the sketch below).
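For example, a rough sketch, reusing Hg, fgrps, m and n from the question. The 1/f weighting and the iteration count are assumptions on my part, and whether your invfreqs build accepts the extra iteration argument is worth checking in its help text:
wt = 1 ./ max(fgrps, 1);      % assumed weighting: emphasise the low-frequency error
wt(fgrps > 1500) = 0;         % still ignore everything above 1500, as in the question
niter = 30;                   % optional iteration count, if your invfreqs supports it
[Bg, Ag] = invfreqs(Hg, fgrps, m, n, wt, niter);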
I have to use SVD in Matlab to obtain a reduced version of my data.
I've read that the function svds(X,k) performs the SVD and returns the k largest singular values and vectors. There is no mention in the documentation of whether the data have to be normalized.
By normalization I mean both subtraction of the mean value and division by the standard deviation.
When I implemented PCA, I used to normalize in that way. But I know that it is not needed when using the MATLAB function pca(), because it computes the covariance matrix using cov(), which implicitly centers the data.
So, the question is: I need the projection matrix that reduces my n-dimensional data to k dimensions via SVD. Should I normalize the training data (and then apply the same normalization to any new data before projecting it), or not?
Thanks
Essentially, the answer is yes, you should typically perform normalization. The reason is that features can have very different scalings, and we typically do not want to take scaling into account when considering the uniqueness of features.
Suppose we have two features x and y, both with variance 1, but where x has a mean of 1 and y has a mean of 1000. Then the matrix of samples will look like
n = 500; % samples
x = 1 + randn(n,1);
y = 1000 + randn(n,1);
svd([x,y])
But the problem with this is that the scale of y (without normalizing) essentially washes out the small variations in x. Specifically, if we just examine the singular values of [x,y], we might be inclined to conclude that x is almost linearly dependent on y (since one of the singular values is much smaller than the other). But actually we know that is not the case, since x was generated independently.
In fact, you will often find that you only see the "real" structure in a signal once you remove the mean. At the extreme end, imagine that we have some feature
z = 1e6 + sin(t)
Now if somebody just gave you those numbers, you might look at the sequence
z = 1000000.54, 1000000.2, 1000000.4,...
and just think, "that signal is boring, it basically is just 1e6 plus some round off terms...". But once we remove the mean, we see the signal for what it actually is... a very interesting and specific one indeed. So long story short, you should always remove the means and scale.
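As a minimal sketch of that recommendation, continuing the x/y example above (the implicit column-wise expansion needs a reasonably recent MATLAB or Octave release; use bsxfun on older ones):
A  = [x, y];
An = (A - mean(A)) ./ std(A);   % subtract the column means, divide by the column stds
svd(An)                         % both singular values now have comparable size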
It really depends on what you want to do with your data. Centering and scaling can be helpful to obtain principal components that are representative of the shape of the variations in the data, irrespective of the scaling. I would say it is mostly needed if you want to use the principal components themselves further, particularly if you want to visualize them. It can also help during classification, since your scores will then be normalized, which may help your classifier. However, it depends on the application: in some applications the energy also carries useful information that one should not discard. There is no general answer!
Now you write that all you need is "the projection matrix useful to reduce my n-dim data to k-dim ones by SVD". In this case, there is no need to center or scale anything:
[U,~] = svd(TrainingData);
ReducedData = U(:,1:k)' * TestData;
will do the job (see the svds sketch below). svds may be worth considering when your TrainingData is huge in both dimensions, so that svd is too slow; if it is huge in only one dimension, just apply svd to the Gram matrix.
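A hedged sketch of the svds variant, reusing TrainingData and TestData from above; k is whatever target dimension you need (an assumption here), and svds only computes the k leading singular triplets:
k = 10;                           % assumed target dimension
[U, ~, ~] = svds(TrainingData, k);
ReducedData = U' * TestData;      % same projection, built from only k singular vectors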
It depends!!!
A common use case in signal processing where it makes no sense to normalize is noise reduction via dimensionality reduction of correlated signals, where all the features are contaminated with Gaussian noise of the same variance. In that case, if the magnitude of a certain feature is twice as large, its SNR is also approximately twice as large, so normalizing the features makes no sense: it would just enlarge the parts with the worse SNR and shrink the parts with the better SNR. You also don't need to subtract the mean in that case (as you would for PCA); the mean (or DC component) isn't different from any other frequency. A sketch of this use case is below.
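A hedged sketch of that use case, where X (assumed here, observations by features) holds the noisy correlated signals and k is the assumed signal-subspace dimension:
[U, S, V] = svd(X, 'econ');                       % no centering, no scaling
k = 3;                                            % assumed number of signal components
X_denoised = U(:,1:k) * S(1:k,1:k) * V(:,1:k)';   % best rank-k approximation of X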
I am trying to find the strongest frequency component with MATLAB. It works, but if the data points and periods are not nicely aligned, I need to zero-pad my data to increase the FFT resolution. So far so good.
The problem is that, when I zero-pad too much, the frequency with the maximal power changes, even if everything is aligned nicely and I would expect a clear result.
This is my MWE:
Tmax = 1024;
resolution = 1024;
period = 512;
X = linspace(0,Tmax,resolution);
Y = sin(2*pi*X/period);
% N_fft = 2^12; % still fine, max_period is 512
N_fft = 2^13; % now max_period is 546.1333
F = fft(Y,N_fft);
power = abs(F(1:N_fft/2)).^2;
dt = Tmax/resolution;
freq = (0:N_fft/2-1)/N_fft/dt;
[~, ind] = max(power);
max_period = 1/freq(ind)
With zero-padding up to 2^12 everything works fine, but when I zero-pad to 2^13, I get a wrong result. It seems like too much zero-padding shifts the spectrum, but I doubt it. I rather expect a bug in my code, but I cannot find it. What am I doing wrong?
EDIT: It seems like the spectrum is skewed towards the low frequencies. Zero-padding just makes this visible:
Why is my spectrum skewed? Shouldn't it be symmetric?
Here is a graphic explanation of what you're doing wrong (which is mostly a resolution problem).
EDIT: this shows the power for each fft data point, mapped to the indices of the 2^14 dataset. That is, the indices for the 2^13 data numbered 1,2,3 map to 1,3,5 on this graph; the indices for 2^12 data numbered 1,2,3 map to 1,5,9; and so on.
You can see that the "true" value should in fact not be 512 -- your indexing is off by 1 or a fraction of 1.
It's not a bug in your code. It has to do with the properties of the DFT (and thus the FFT, which is merely a fast algorithm for the DFT).
When you zero-pad, you add frequency resolution, particularly on the lower end.
Here you use a sine wave as a test, so you are basically correlating a finite-length sine with finite sines and cosines (see https://en.wikipedia.org/wiki/Fast_Fourier_transform for details), which have almost the same or lower frequency.
If you were doing a "proper" Fourier transform, i.e. integrals from -inf to +inf, even those low-frequency components would give you zero coefficients, but since you are doing finite sums, the result of those sums is not zero, and hence the computed Fourier transform is inaccurate.
TL;DR: Use a better window function!
The long version:
After searching further, I finally found the explanation. The problem is neither the indexing nor the additional low-frequency components added by the zero-padding. The culprit is the frequency response of the rectangular window, combined with the negative-frequency component of the sine. I found this out on this website explaining window functions.
I made more plots to explain:
Top: the frequency response without windowing: two delta peaks, one at the positive and one at the negative frequency. I always plotted only the positive part, since I didn't expect to need the negative-frequency components.
Middle: the frequency response of the rectangular window function. It is relatively broad, but I didn't care, because I thought I'd have only a single peak.
Bottom: the frequency response of the zero-padded signal. In the time domain this is the multiplication of the window function and the sine wave; in the frequency domain this amounts to the convolution of the window's frequency response with the frequency response of the perfect sine. Since there are two peaks, the relatively broad frequency responses of the window overlap significantly, leading to a skewed spectrum and therefore a shifted peak.
The solution: use a proper window function, like a Hamming window, whose frequency response falls off much faster, so the two peaks overlap far less (a sketch based on the MWE follows).
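A minimal sketch based on the MWE above; hamming comes from the Signal Processing Toolbox in MATLAB or the signal package in Octave, and the other variables are those of the MWE:
Yw = Y .* hamming(numel(Y)).';     % window the finite-length sine before zero-padding
F  = fft(Yw, N_fft);               % same zero-padded FFT as before
power = abs(F(1:N_fft/2)).^2;
[~, ind] = max(power);
max_period = 1/freq(ind)           % should now stay close to 512 even for large N_fft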
I have some data which I wish to model in order to be able to get relatively accurate values in the same range as the data.
To do this I used polyfit to fit a 6th-order polynomial, and because of my x-axis values it suggested that I centre and scale them to get a more accurate fit, which I did.
However, now I want to find the derivative of this function in order to model the velocity.
But I am not sure how the polyder function interacts with the centred and scaled fit that polyfit has produced. (I don't want to use the unscaled model, as it is not very accurate.)
Here is some code which reproduces my problem. I attempted to rescale the x values before putting them into the fit for the derivative, but this still did not fix the problem.
x = 0:100;
y = 2*x.^2 + x + 1;
Fit = polyfit(x,y,2);
[ScaledFit,s,mu] = polyfit(x,y,2);
Deriv = polyder(Fit);
ScaledDeriv = polyder(ScaledFit);
plot(x,polyval(Deriv,x),'b.');
hold on
plot(x,polyval(ScaledDeriv,(x-mu(1))/mu(2)),'r.');
Here I have chosen a simple polynomial so that I can fit it accurately and know the true derivative.
Any help would be greatly appreciated thanks.
I am using Matlab R2014a BTW.
Edit.
Just been playing about with it: dividing the evaluated derivative by the standard deviation mu(2) gives a very close result, with errors only in the range of about -3e-13 to 5e-13.
polyval(ScaledDeriv,(x-mu(1))/mu(2))/mu(2);
I'm not quite sure why this is the case; is there a more elegant way to solve this?
Edit 2. Sorry for another edit, but I was experimenting again and found that for a larger sample, x = 1:1000, the deviation becomes much bigger, up to about 10. I am not sure whether this is due to a poor polyfit (even though it is centred and scaled) or due to the way the derivative is plotted.
Thanks for your time
A simple application of the chain rule gives

d/dx [ p((x - mu(1))/mu(2)) ] = p'((x - mu(1))/mu(2)) * d/dx [ (x - mu(1))/mu(2) ]

where p is the polynomial returned by the centred-and-scaled polyfit. Since by definition

d/dx [ (x - mu(1))/mu(2) ] = 1/mu(2)

it follows that

d/dx [ p((x - mu(1))/mu(2)) ] = p'((x - mu(1))/mu(2)) / mu(2)

which is exactly what you have verified numerically.
The lack of accuracy for large samples is due to the global, rather than local, polynomial fit which you have done. I would suggest that you try to fit your data with splines and obtain the derivative with fnder(). Another option is to apply the polyfit() function locally, i.e. to a moving small set of points, and then apply polyder() to each of the fitted polynomials.
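A minimal sketch of the spline suggestion, assuming the x and y from the question (fnder and fnval come from the Curve Fitting Toolbox):
pp  = spline(x, y);            % piecewise cubic fit of the data
dpp = fnder(pp);               % derivative of the piecewise polynomial
plot(x, fnval(dpp, x), 'g.')   % velocity estimate; no centering or scaling needed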
I am trying to measure the PSD of a stochastic process in MATLAB, but I am not sure how to do it. I have posted the exact same question here, but I thought I might have more luck here.
The stochastic process describes wind speed, and is represented by a vector of real numbers. Each entry corresponds to the wind speed in a point in space, measured in m/s. The points are 0.0005 m apart. How do I measure and plot the PSD? Let's call the vector V. My first idea was to use
[p, w] = pwelch(V);
loglog(w,p);
But is this correct? The thing is, that I'm given an analytical expression, which the PSD should (in theory) match. By plotting it with these two lines of code, it looks all wrong. Specifically it looks as though it could need a translation and a scaling. Other than that, the shapes of the two are similar.
UPDATE:
The image above actually doesn't depict the PSD obtained by using pwelch on a single vector, but rather the mean of the PSDs of 200 vectors, since these vectors stem from numerical simulations. As suggested, I have tried scaling by 2*pi/0.0005. I saw that you can actually give this information to pwelch. So I tried using the code
[p, w] = pwelch(V,[],[],[],2*pi/0.0005);
loglog(w,p);
instead. As seen below, it looks much nicer. It is, however, still not perfect. Is that just something I should expect? Taking the square root is not the answer, by the way, but thanks for the suggestion. For one thing, the spectrum should follow Kolmogorov's -5/3 law, which it does now (it follows the green line, which has slope -5/3). The function I'm trying to match it with is the Shkarofsky spectral density function, which is the one-dimensional Fourier transform of the Shkarofsky correlation function. Is it not possible to mark up math here on the site?
UPDATE 2:
I have tried using [p, w] = pwelch(V,[],[],[],1/0.0005); as suggested. But as you can see, it still doesn't quite match up. It's hard for me to explain exactly what I'm looking for, but what I would like (or what I expected) is that the dip of the computed and the analytical PSD happens at the same place and falls off at the same rate. The data come from simulations of turbulence. The analytical expression has been fitted to actual measurements of turbulence, in which this dip is present as well. I'm no expert at all, but as far as I know the dip happens at the small length scales, since the energy is dissipated there due to the viscosity of the air.
What about using the standard equation for obtaining a PSD? I would do it this way:
Sxx = fft(x).*conj(fft(x)) * dt^2;
or
Sxx = fftshift(abs(fft(x)).^2) * dt^2;
Then, if you really need to, you may think of applying a window function, such as
Hanning
Hamming
Welch
which will essentially just smooth/filter your PSD. A hedged sketch of this direct approach follows.
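A sketch of this direct (periodogram-style) estimate, assuming V is the wind-speed vector from the question and dx = 0.0005 m; note that the scaling below follows the usual periodogram convention, |X|^2*dx/N, rather than dt^2:
dx  = 0.0005;                       % sample spacing in metres
N   = numel(V);
Vd  = V - mean(V);                  % remove the mean so the DC bin does not dominate
X   = fft(Vd);
Sxx = (dx/N) * abs(X).^2;           % two-sided PSD estimate, units (m/s)^2 per (1/m)
Sxx = Sxx(1:floor(N/2)+1);
Sxx(2:end-1) = 2*Sxx(2:end-1);      % fold into a one-sided spectrum
k   = (0:floor(N/2)).' / (N*dx);    % wavenumber axis in 1/m
loglog(k, Sxx)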
Presumably you need to rescale the frequency (wavenumber) to units of 1/m.
The frequency units from pwelch should be rescaled, since as the documentation explains:
W is the vector of normalized frequencies at which the PSD is
estimated. W has units of rad/sample.
Off the cuff my guess is that the scaling factor is
scale = 1/0.0005/(2*pi);
or 318.3 (m^-1).
As for the intensity, it looks like taking a square root might help. Perhaps your equation reports an intensity, not PSD?
Edit
As you point out, since the analytical and computed PSD have nearly identical slopes they appear to obey similar power laws up to 800 m^-1. I am not sure to what degree you require exponents or offsets to match to be satisfied with a specific model, and I am not familiar with this particular theory.
As for the apparent inconsistency at high wavenumbers, I would point out that you are entering the domain of very small numbers and therefore (1) floating point issues and (2) noise are probably lurking. The very nice looking dip in the computed PSD on the other hand appears very real but I have no explanation for it (maybe your noise is not white?).
You may want to look at this submission at matlab central as it may be useful.
Edit #2
After inspecting the documentation of pwelch, it appears that you should pass 1/0.0005 (the sampling rate) and not 2*pi/0.0005. This should not affect the slope but will affect the intercept.
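A hedged example of that call; V is the vector from the question, and the Hamming segment length of 1024 is just an assumption (it requires V to be at least that long):
fs = 1/0.0005;                                   % samples per metre
[p, k] = pwelch(V, hamming(1024), [], [], fs);   % k is returned in 1/m
loglog(k, p)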
The dip in PSD in your simulation results looks similar to aliasing artifacts that I have seen in my data when the original data were interpolated with a low-order method. To make this clearer: say my original data was spaced at 0.002 m, but in the course of cleaning up missing data, trying to save space, whatever, I linearly interpolated those data onto a 0.005 m spacing. The frequency response of linear interpolation is not well-behaved, and will introduce peaks and valleys at the high-wavenumber end of your spectrum.
There are different conventions for spectral estimates: whether the wavenumber units are 1/m or radians/m, and single-sided versus double-sided spectra.
help pwelch
shows that the default settings return a one-sided spectrum, i.e. the bin for some frequency ω will include the power density for both +ω and -ω. You should double-check that the idealized spectrum to which you are comparing is also a one-sided spectrum. Otherwise, you'll need to halve the values of your one-sided spectrum to get values representative of the +ω side of a two-sided spectrum.
I agree with Try Hard that it is the cyclic frequency (generally Hz, or in this case 1/m) which should be specified to pwelch. That said, the returned frequency vector from pwelch is also in those units. Analytical spectral formulae are usually written in terms of angular frequency, so you'll want to be sure that you evaluate them in terms of radians/m, but scale back to 1/m for plotting.
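A hedged sketch of that bookkeeping, where shkarofsky_psd is a placeholder name for your analytical expression written in terms of angular wavenumber (rad/m):
[p, k] = pwelch(V, [], [], [], 1/0.0005);   % k in cyclic units, 1/m
S_cyc  = 2*pi * shkarofsky_psd(2*pi*k);     % evaluate at rad/m, convert the density to per-(1/m)
loglog(k, p, k, S_cyc)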
I'm having fitting issues with nlinfit. I can't seem to figure out how to improve the fit. Decreasing TolX or TolFun has not changed the value in coeffs.
model = @(a,x) 1./(1 + a*x.^2);
model0 = [1e13];
opts = statset('TolX', 1e-25, 'TolFun', 1e-25);
coeffs = nlinfit(freqData, noiseData, model, model0, opts);
Here's my fit.
http://i.imgur.com/v1dkd4X.png
It seems like you are dealing with very small numbers, so there might be a floating-point precision issue. Why don't you transform the expression to a different form, then fit, then inverse-transform?
For example:
take 1/model as the transformation; now you have just a simple polynomial fit,
model_new = @(a,x) 1 + a*x.^2
where you can use polyfit and polyval, then take 1/result ...
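A hedged sketch of this idea, reusing freqData and noiseData from the question (if the model is right, the fitted p(2) and p(3) should come out close to 0 and 1):
p     = polyfit(freqData, 1 ./ noiseData, 2);   % 1/model = 1 + a*x.^2 is quadratic in x
a_est = p(1);                                   % estimated a
yfit  = 1 ./ polyval(p, freqData);
plot(freqData, noiseData, '.', freqData, yfit, '-')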
I fit simulated data that looks similar to yours, without scaling:
The trick is to inspect your data: your signal amplitude drops from ~1.5 to ~1.0 between x ~ 40 and ~150. Yet if you inspect the function, it's clear that its value can never exceed 1 (for non-negative a), so it cannot model data that sits above 1.
This data is better fit by including an initial amplitude:
model_new = @(a,x) a(1)./(1 + a(2)*x.^2);
Looking at the fitted function plotted on top of your data, it looks like you also include a scaling parameter somewhere.
Including an amplitude parameter improves on the original function, but is not necessarily safe: your data is noisy, and not dropping by much, so you can expect your uncertainties (and correlations) to be large.
Scaling the data before fitting probably would not really help here, since you don't have data down to x=0 and don't know what an appropriate scaling factor should be.
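For what it's worth, a hedged sketch of the two-parameter fit, reusing freqData, noiseData and opts from the question; the starting values are rough guesses from the plot (amplitude near 1.5, gentle roll-off), not known-good values:
model_new = @(a,x) a(1) ./ (1 + a(2)*x.^2);
a0        = [1.5, 1e-5];                        % assumed starting point
coeffs2   = nlinfit(freqData, noiseData, model_new, a0, opts);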