Utility of loglog plots in curve fitting an inverse square relationship - MATLAB

I have a bunch of data that I'd like to use to find an unknown parameter in a physical equation.
I'm trying to find a parameter k to characterise the output of a Hall effect sensor as a function of the input voltage and of the distance between the sensor and the magnet. I've found the output to be inversely proportional to the square of the distance.
I asked my professor how to use MATLAB to find the unknown parameter, and he told me I could try taking the logarithm of both sides of the equation and plotting that, since that would make the relationship linear and thus easier to fit.
I have to do this in MATLAB, and I'm assuming the values I measured would have to be converted by hand before any sort of curve fitting can be performed on them.
I was wondering whether doing that is worth it, and whether there is a faster way of doing this.
Thanks :)

In order to identify the relationship easily, for a fixed input voltage I took the logarithm of the measured distances and of the corresponding output voltages and plotted those. Fitting a line through those points then showed that the slope was close enough to -2, confirming the inverse-square relationship.
I then did the same for different input voltages and overlaid everything on the same plot.
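For anyone doing the same thing: the log conversion doesn't need to be done by hand, since log10 and polyfit handle it in a couple of lines. A minimal sketch, assuming the model has the form Vout = k*Vin./d.^2 (the exact dependence on Vin is an assumption here) and that d, Vout and the scalar Vin hold the measured quantities; the variable names are just illustrative:

% Minimal sketch, assuming the model Vout = k*Vin./d.^2 and that the
% measured distances d and output voltages Vout are vectors for one
% fixed input voltage Vin (names are illustrative, not from the post).
logd = log10(d);
logV = log10(Vout);

p = polyfit(logd, logV, 1);    % straight-line fit in log-log space
slope = p(1);                  % should be close to -2 for an inverse-square law
k = 10^p(2) / Vin;             % intercept gives log10(k*Vin) for this input voltage

% Visual check of the fit
loglog(d, Vout, 'o', d, (k*Vin)./d.^2, '-');
xlabel('distance'); ylabel('output voltage');
legend('measured', 'fitted inverse-square model');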

Related

Are MATLAB's lsim outputs derivatives or the state vector?

I'm trying to do a simulation of a 2-body mass-spring-damper. I've set up a state-space model that I'm pretty confident in and set an input of a displacement and velocity at the base in just one degree of freedom. Upon getting my outputs, I expected that the output vector would just be the state vector at each time step. However, when plotting the output vector corresponding to displacement for each mass in the vertical direction (the input direction), it looked much more like a velocity (0 at the extrema of the input). The plots are shown below:
When I integrated the top 2 plots, I got the following:
Now, I obviously can just accept the outputs as they are and assume I am right in my understanding. But I want to be sure. From the documentation page:
lsim(___) also returns the time vector t used for simulation and the state trajectories x (for state-space models only)
I'm just hoping to find out whether or not I am correct that the output matrix columns correspond to the history of the state derivatives before I base an analysis on a bad assumption.
I figured it out: my B-matrix expected the inputs ordered as [derivative, state, ...], but I had supplied them in the opposite order.
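For reference, a minimal sketch of pulling the state trajectories directly out of lsim; a single mass-spring-damper stands in for the poster's 2-body model, and the numbers are illustrative only:

% Minimal sketch: single mass-spring-damper; m, c, k and the input
% below are illustrative values, not the poster's system.
m = 1; c = 0.5; k = 10;
A = [0 1; -k/m -c/m];
B = [0; 1/m];                      % force input
C = eye(2);                        % output the states directly
D = zeros(2,1);
sys = ss(A, B, C, D);

t = (0:0.01:10).';
u = sin(2*pi*0.5*t);               % example input history
[y, tOut, x] = lsim(sys, u, t);    % third output x is the state trajectory itself

% With C = eye(2) and D = 0 the outputs equal the states (not their derivatives).
plot(tOut, x(:,1), tOut, y(:,1), '--');
legend('state x_1 from lsim', 'output y_1');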

Estimating the error when fitting a curve with DCT and polyfit

I have a MATLAB script that performs curve fitting on a set of curves using polynomials of third, second and first order (using polyfit with the desired order) and also using the DCT with 4, 3 and 2 coefficients (invoking dct on the whole array and then keeping only the first 4, 3 or 2 coefficients).
I'm able to get a graphical view of the accuracy of each curve fit using polyval and idct for the two types of fitting, but I was wondering if there is any way of getting a numeric accuracy value that makes sense for both approaches (dct and polyfit).
I'm sure this is more a maths question than a MATLAB question, but maybe there is a simple and elegant array-based way to do this that I haven't thought of yet.
Thanks in advance for your comments!
EDIT: What about correlation? :D
In the curve fitting tool there should be a residual that uses the standard deviation. If you are interested in another way to do it, you could compute the RMSE for the entire curve, scripting a function that does something like this (a sketch follows below):
input args: y1 = (curve to be fitted), y2 = (fitted curve)
For each entry, sum up the squared differences (y1 - y2)
Divide by the number of entries
Return the square root of the resulting number
See http://en.wikipedia.org/wiki/Root-mean-square_deviation#Formula for more.
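A minimal sketch of that recipe as an anonymous function (the name rms_error is just illustrative, and x, y stand for one of your curves and its sample points); the same measure applies to both the polyfit and the truncated-DCT reconstructions:

% Minimal sketch: RMSE as an anonymous function, usable for both fits.
rms_error = @(y1, y2) sqrt(mean((y1(:) - y2(:)).^2));

% Polynomial fit of order 3:
p      = polyfit(x, y, 3);
e_poly = rms_error(y, polyval(p, x));

% DCT fit keeping the first 4 coefficients:
c        = dct(y);
c(5:end) = 0;
e_dct    = rms_error(y, idct(c));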

Matlab inverse fast Fourier transform for frequency-wavenumber field, do I need to make conjugation and flip?

First I describe the physics: the problem is axisymmetric, with one sound source placed at the origin and one sensor placed on the axis below the source. Given the source waveform, I try to get the sensor's waveform. All material parameters (for instance, sound speed and density) are known.
I wrote a MATLAB script to calculate it. By solving the sound propagation equation I can get a function, say A(w,k), where w is frequency and k is wavenumber; this is the so-called frequency-wavenumber field. My MATLAB code works like this:
discretize w and k to get an array of A values; first apply an FFT over k to get space and frequency information, then an FFT over w to get space and time information, i.e. the waveform at the different points.
The pseudocode:
for i_w = ...
    w = ...;
    for i_k = ...
        k = ...;
        M(i_k) = A(w,k);                     % wavenumber spectrum for this w
    end
    wave_space_freq(:,i_w) = ifft(M(:));     % IFFT over k: space-frequency field
end % here the row corresponding to the sensor's position can be selected
wave_space_time = ifft(wave_space_freq, [], 2);  % IFFT over w: space-time waveform
My question is: do I need to apply conjugation and flipping when I use the IFFT, like ifft([M, 0, fliplr(conj(M))])? I saw some others do this, but I don't understand why.
If you want a strictly real-valued result waveform (not complex with significant imaginary components), then the input to an IFFT has to be conjugate symmetric, such as:
ifft([dc_term, M, 0, fliplr(conj(M))]).
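In MATLAB terms, a minimal sketch, assuming M is a row vector holding only the positive-frequency bins and dc_term the DC bin (the 0 stands in for the Nyquist bin):

% 1) Build the conjugate-symmetric spectrum explicitly and invert it
S  = [dc_term, M, 0, fliplr(conj(M))];   % conjugate-symmetric spectrum
x1 = ifft(S);                            % real up to round-off

% 2) Alternatively, ask ifft to enforce the symmetry (ignores small asymmetries)
x2 = ifft(S, 'symmetric');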

How to measure power spectral density in matlab?

I am trying to measure the PSD of a stochastic process in MATLAB, but I am not sure how to do it. I have posted the exact same question elsewhere, but I thought I might have more luck here.
The stochastic process describes wind speed, and is represented by a vector of real numbers. Each entry corresponds to the wind speed at a point in space, measured in m/s. The points are 0.0005 m apart. How do I measure and plot the PSD? Let's call the vector V. My first idea was to use
[p, w] = pwelch(V);
loglog(w,p);
But is this correct? The thing is, that I'm given an analytical expression, which the PSD should (in theory) match. By plotting it with these two lines of code, it looks all wrong. Specifically it looks as though it could need a translation and a scaling. Other than that, the shapes of the two are similar.
UPDATE:
The image above actually doesn't depict the PSD obtained by using pwelch on a single vector, but rather the mean of the PSDs of 200 vectors, since these vectors stem from numerical simulations. As suggested, I have tried scaling by 2*pi/0.0005. I saw that you can actually give this information to pwelch. So I tried using the code
[p, w] = pwelch(V,[],[],[],2*pi/0.0005);
loglog(w,p);
instead. As seen below, it looks much nicer. It is, however, still not perfect. Is that just something I should expect? Taking the square root is not the answer, by the way, but thanks for the suggestion. For one thing, it should follow Kolmogorov's -5/3 law, which it does now (it follows the green line, which has slope -5/3). The function I'm trying to match it with is the Shkarofsky spectral density function, which is the one-dimensional Fourier transform of the Shkarofsky correlation function. Is it not possible to mark up math here on the site?
UPDATE 2:
I have tried using [p, w] = pwelch(V,[],[],[],1/0.0005); as was suggested. But as you can see, it still doesn't quite match up. It's hard for me to explain exactly what I'm looking for, but what I would like (or, what I expected) is that the dip of the computed and the analytical PSD happens in the same place and falls off at the same rate. The data come from simulations of turbulence. The analytical expression has been fitted to actual measurements of turbulence, in which this dip is present as well. I'm no expert at all, but as far as I know the dip happens at the small length scales, where the energy is dissipated due to the viscosity of the air.
What about using the standard equation for obtaining a PSD? I'd do it this way:
Sxx = (fft(x).*conj(fft(x)))*(dt^2);
or
Sxx = fftshift(abs(fft(x)).^2)*(dt^2);
Then, if you really need to, you may think of applying a windowing criterion, such as
Hanning
Hamming
Welch
which will only filter your PSD somewhat.
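For instance, a minimal sketch of that raw-FFT estimate with a Hann window; x is the sampled signal and dt = 0.0005 m the spacing, both taken from the question, and the normalization shown is the usual window-compensated periodogram one:

% Minimal sketch: windowed periodogram (x = sampled signal, dt = spacing;
% hann() needs the Signal Processing Toolbox, which pwelch already implies).
dt  = 0.0005;
N   = length(x);
w   = hann(N);                           % or hamming(N), etc.
X   = fft(x(:) .* w);
Sxx = (abs(X).^2) * dt / sum(w.^2);      % window-compensated PSD estimate
f   = (0:N-1).' / (N*dt);                % cyclic frequency axis, here in 1/m
loglog(f(2:floor(N/2)), Sxx(2:floor(N/2)));   % positive frequencies, skip DC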
Presumably you need to rescale the frequency (wavenumber) to units of 1/m.
The frequency units from pwelch should be rescaled, since as the documentation explains:
W is the vector of normalized frequencies at which the PSD is estimated. W has units of rad/sample.
Off the cuff my guess is that the scaling factor is
scale = 1/0.0005/(2*pi);
or 318.3 (m^-1).
As for the intensity, it looks like taking a square root might help. Perhaps your equation reports an intensity, not PSD?
Edit
As you point out, since the analytical and computed PSD have nearly identical slopes they appear to obey similar power laws up to 800 m^-1. I am not sure to what degree you require exponents or offsets to match to be satisfied with a specific model, and I am not familiar with this particular theory.
As for the apparent inconsistency at high wavenumbers, I would point out that you are entering the domain of very small numbers and therefore (1) floating point issues and (2) noise are probably lurking. The very nice looking dip in the computed PSD on the other hand appears very real but I have no explanation for it (maybe your noise is not white?).
You may want to look at this submission at matlab central as it may be useful.
Edit #2
After inspecting the documentation of pwelch, it appears that you should pass 1/0.0005 (the sampling rate) and not 2*pi/0.0005. This should not affect the slope but will affect the intercept.
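Putting the two suggestions together, a sketch of the call (the 0.0005 m spacing is the one given in the question):

% Sketch: pass the cyclic sampling rate (samples per metre) to pwelch,
% not the angular one.
fs = 1/0.0005;                        % samples per metre
[p, w] = pwelch(V, [], [], [], fs);   % w is then in cycles per metre (1/m)
loglog(w, p);
xlabel('wavenumber (1/m)'); ylabel('PSD of wind speed');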
The dip in PSD in your simulation results looks similar to aliasing artifacts that I have seen in my data when the original data were interpolated with a low-order method. To make this clearer - say my original data was spaced at 0.002m, but in the course of cleaning up missing data, trying to save space, whatever, I linearly interpolated those data onto a 0.005m spacing. The frequency response of linear interpolation is not well-behaved, and will introduce peaks and valleys at the high wavenumber end of your spectrum.
There are different conventions for spectral estimates - whether the wavenumber units are 1/m or radians/m, and single-sided spectra or double-sided spectra.
help pwelch
shows that the default settings return a one-sided spectrum, i.e. the bin for some frequency ω will include the power density for both +ω and -ω. You should double check that the idealized spectrum to which you are comparing is also a one-sided spectrum. Otherwise, you'll need to halve the values of your one-sided spectrum to get values representative of the +ω side of a two-sided spectrum.
I agree with Try Hard that it is the cyclic frequency (generally Hz, or in this case 1/m) which should be specified to pwelch. That said, the returned frequency vector from pwelch is also in those units. Analytical spectral formulae are usually written in terms of angular frequency, so you'll want to be sure that you evaluate them in terms of radians/m, but scale back to 1/m for plotting.
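A sketch of that unit bookkeeping; shkarofsky_psd is just a placeholder for however the analytical expression is evaluated, and the 2*(2*pi) factor assumes that expression is a two-sided density per rad/m being compared to pwelch's one-sided density per 1/m:

% Sketch of comparing the pwelch estimate with an analytical spectrum.
fs = 1/0.0005;                         % samples per metre (from the question)
[p, f] = pwelch(V, [], [], [], fs);    % one-sided PSD, f in 1/m

kappa = 2*pi*f;                        % evaluate the formula in rad/m
S_two = shkarofsky_psd(kappa);         % placeholder for the analytical density
S_one = 2 * (2*pi) * S_two;            % two-sided per rad/m -> one-sided per 1/m

loglog(f, p, f, S_one);
legend('pwelch estimate', 'analytical spectrum');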

matlab interpolation

Starting from the plot of one curve, is it possible to obtain the parametric equation of that curve?
In particular, say x = {1 2 3 4 5 6 ...} is the x axis and y = {a b c d e f ...} the corresponding y values, and I have plot(x,y).
Now, how can I obtain the equation that describes the plotted curve? Is it possible to display the parametric equation starting from the spline interpolation?
Thank you
If you want to display a polynomial fit function alongside your graph, the following example should help:
x=-3:.1:3;
y=4*x.^3-5*x.^2-7.*x+2+10*rand(1,61);
p=polyfit(x,y,3); %# third order polynomial fit, p=[a,b,c,d] of ax^3+bx^2+cx+d
yfit=polyval(p,x); %# evaluate the curve fit over x
plot(x,y,'.')
hold on
plot(x,yfit,'-g')
equation=sprintf('y=%2.2gx^3+%2.2gx^2+%2.2gx+%2.2g',p); %# format string for equation
equation=strrep(equation,'+-','-'); %# replace any redundant signs
text(-1,-80,equation) %# place equation string on graph
legend('Data','Fit','Location','northwest')
Last year, I wrote up a set of three blog posts for Loren, on the topic of modeling/interpolating a curve. They may cover some of your questions, although I never did find the time to add another three posts to finish the topic to my satisfaction. Perhaps one day I will get that done.
The problem is to recognize there are infinitely many curves that will interpolate a set of data points. A spline is a nice choice, because it can be made well behaved. However, that spline has no simple "equation" to write down. Instead, it has many polynomial segments, pieced together to be well behaved.
You're asking for the function/mapping between two data sets. Knowing the physics involved, the function can be derived by modeling the system: write down the differential equations and solve them.
Left alone with just two data series, an input and an output with a 'black box' in between, you may approximate the relationship with an arbitrary function. You may start with a polynomial function
y = a*x^2 + b*x + c
Given your input vector x and your output vector y, the parameters a, b and c must be determined by applying a fitness function, as sketched below.
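A minimal sketch of that idea, using ordinary least squares as the fitness criterion; x and y are assumed to be column vectors of equal length:

% Minimal sketch: determine a, b, c by least squares (the "fitness
% function" is the sum of squared residuals).
A    = [x.^2, x, ones(size(x))];   % design matrix for y = a*x^2 + b*x + c
abc  = A \ y;                      % least-squares estimate [a; b; c]
yfit = A * abc;
plot(x, y, '.', x, yfit, '-');
legend('data', 'quadratic fit');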
There is an example of Polynomial Curve Fitting in the MathWorks documentation.
The Curve Fitting Tool provides a flexible graphical user interface where you can interactively fit curves and surfaces to data and view plots. You can:
Create, plot, and compare multiple fits
Use linear or nonlinear regression, interpolation, local smoothing regression, or custom equations
View goodness-of-fit statistics, display confidence intervals and residuals, remove outliers and assess fits with validation data
Automatically generate code for fitting and plotting surfaces, or export fits to the workspace for further analysis
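The same workflow can also be driven programmatically; a short sketch, assuming the Curve Fitting Toolbox is installed and x, y are column vectors:

% Short sketch using the Curve Fitting Toolbox programmatically.
[f, gof] = fit(x, y, 'poly3');   % cubic fit object plus goodness-of-fit stats
plot(f, x, y);                   % overlay fit and data
disp(f);                         % prints the fitted equation and coefficients
disp(gof);                       % R-square, RMSE, etc.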