Rate of change in MATLAB

I have plotted a graph of time vs. theta. As time increases, theta decreases up to some point, after which it starts increasing. Now I want to find at what rate it is decreasing. The equation is theta = exp(-t/tau), and I have to find tau. Can anyone help me, please?

It is not entirely clear from your question where you think that your problem is. But, when I read your question, it sounds like you are trying to fit an equation to some real data. Specifically, it sounds like: (1) you have some real data, (2) only part of the data is interesting to you, and (3) for that interesting data, you want to fit it to the equation theta=exp(-t/tau).
If that is indeed what you want, then you first must find just those data points that you think should be fit with the equation. I would plot your data points and then, by eye, decide which are the ones that are relevant to you. Discard the rest.
Next, you need to fit them to your equation. Since your equation is an exponential, the easiest way to find tau is to convert it to a linear equation. When you do this, you get log(theta) = -t / tau. Or, equivalently, log(theta) = -(1/tau) * t.
If you take the log of all of your theta data points and plot them versus t, you should see a straight line. If this is truly the equation that will match your data, your data points should go through log(theta) = 0.0 at t = 0.0. If so, you can find tau by evaluating the slope of the line: slope = mean(log(theta)./t). Then, tau = -1/slope.
If your data points did not go through zero, you will need to shift them by some time offset so that they do go through zero. Then you can evaluate the slope and get your tau value.
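As a minimal sketch of that procedure in MATLAB (the data here are made up for illustration; in practice t and theta would be just the retained, decaying portion of your measurements, with any time offset already removed):
t = 0:0.1:5;                                 % hypothetical time samples
theta = exp(-t/2);                           % hypothetical data with tau = 2
slope = mean(log(theta(2:end)) ./ t(2:end)); % average slope of log(theta) vs t, skipping t = 0
tau = -1/slope                               % recovered decay constant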
This isn't really a Matlab question, by the way. Computationally, this is a very simple problem, so if Matlab is new to you, you might be making this harder than it needs to be. It could just as easily be done in Excel (or any spreadsheet) or whatever tool might be easier to use.

Related

I have some problems with the derivative in Matlab

In MATLAB:
Using the X and Y values below, write a MATLAB function SECOND_DERIV. The output of the function should be the approximate value for the second derivative of the data at x, the input variable of the function.
Use the forward difference method and interpolate to get your final answer;
X=[1,1.2,1.44,1.73,2.07,2.49,2.99,3.58,4.3,5.16,6.19,7.43,8.92,10.7,12.84,15.41,18.49];
Y=[18.89,19.25,19.83,20.71,21.96,23.6,25.56,27.52,28.67,27.2,19.38,-2.05,-50.9,-152.82,-354.73,-741.48,-1465.11];
This is my code:
function output = SECOND_DERIV(R)
X=[1,1.2,1.44,1.73,2.07,2.49,2.99,3.58,4.3,5.16,6.19,7.43,8.92,10.7,12.84,15.41,18.49];
Y=[18.89,19.25,19.83,20.71,21.96,23.6,25.56,27.52,28.67,27.2,19.38,-2.05,-50.9,-152.82,-354.73,-741.48,-1465.11];
%forward difference method first time.
XX=X(1:end-1)
%first derivative.
dydx=diff(Y)./diff(X)
%second derivative.
dydx2=diff(dydx)
%forward difference method second time.
XXX=XX(1:end-1)
%get the second derivative from input x.
output= interp1(XXX,dydx2,x,'linear','extrap')
end
I do not know what is wrong with it.
This is the result I got from my course's web page:
First, there is no "the" approximate value, but rather only "an" approximate value among an infinite set of approximation schemes. In that sense your exercise is ill-defined (though, to be fair, there was probably something in your lessons that completes the specification).
Using forward differences twice is almost as bad an approximation as it can get. With each forward difference you are displacing the abscissa of the preferred (central difference) approximation by half a sample distance towards the "past".
For the first difference this can be justified by the fact that you might want to stick with the original X-samples. But in the second step you introduce a second displacement by half a sample distance. In order to keep approximation error at least reasonably low, the least you can do is to correct the displacement afterwards by one sample distance towards the "future". This doesn't bring you exactly back to central differences because of non-equidistance, but it's the minimal correction that should be done for the sake of accuracy.
Hence I would replace
XXX=XX(1:end-1)
by
XXX=XX(2:end)
But again, like so many school exercises, the problem is ill-defined, and it is difficult to tell from a distance whether this is what is expected of you.
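For reference, here is a minimal corrected sketch of the whole function, assuming the input argument was meant to be called x (the original header takes R but then uses x inside interp1), applying the XX(2:end) shift suggested above, and additionally dividing the second difference by the spacing of the first-derivative abscissae (an extra step not in the original code, so that the result has the units of a second derivative):
function output = SECOND_DERIV(x)
X = [1,1.2,1.44,1.73,2.07,2.49,2.99,3.58,4.3,5.16,6.19,7.43,8.92,10.7,12.84,15.41,18.49];
Y = [18.89,19.25,19.83,20.71,21.96,23.6,25.56,27.52,28.67,27.2,19.38,-2.05,-50.9,-152.82,-354.73,-741.48,-1465.11];
% first forward difference and its abscissae
XX = X(1:end-1);
dydx = diff(Y)./diff(X);
% second forward difference, divided by the spacing of the first-derivative abscissae
dydx2 = diff(dydx)./diff(XX);
% shift the abscissae one sample towards the "future", as suggested above
XXX = XX(2:end);
% interpolate the second derivative at the requested x
output = interp1(XXX, dydx2, x, 'linear', 'extrap');
end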

Differentiating a Centred and Scaled Polyfit Fit

I have some data which I wish to model in order to be able to get relatively accurate values in the same range as the data.
To do this I used polyfit to fit a 6th-order polynomial, and because of my x-axis values it suggested I centre and scale the fit for better accuracy, which I did.
However, now I want to find the derivative of this function in order to model the velocity of my model.
But I am not sure how the polyder function interacts with the scaled and fitted polyfit which I have produced. (I don't want to use the unscaled model as this is not very accurate).
Here is some code which reproduces my problem. I attempted to rescale the x values before putting them into the fit for the derivative, but this still did not fix the problem.
x = 0:100;
y = 2*x.^2 + x + 1;
Fit = polyfit(x,y,2);
[ScaledFit,s,mu] = polyfit(x,y,2);
Deriv = polyder(Fit);
ScaledDeriv = polyder(ScaledFit);
plot(x,polyval(Deriv,x),'b.');
hold on
plot(x,polyval(ScaledDeriv,(x-mu(1))/mu(2)),'r.');
Here I have chosen a simple polynomial so that I could fit it accurately and produce the actual derivative.
Any help would be greatly appreciated thanks.
I am using Matlab R2014a BTW.
Edit.
Just been playing about with it; by dividing the resulting points for the derivative by the standard deviation mu(2), I got a very close result, with errors in the range of roughly -3e-13 to 5e-13.
polyval(ScaledDeriv,(x-mu(1))/mu(2))/mu(2);
Not sure quite why this is the case, is there another more elegant way to solve this?
Edit 2. Sorry for another edit, but I was mucking around again and found that for a large sample, x = 1:1000, the deviation became much bigger, up to about 10. I am not sure if this is due to a bad polyfit, even though it is centred and scaled, or due to the funny way the derivative is plotted.
Thanks for your time
A simple application of the chain rule gives
d/dx [ p((x - mu(1))/mu(2)) ] = (1/mu(2)) * p'((x - mu(1))/mu(2)).
Since by definition the scaled variable is xhat = (x - mu(1))/mu(2),
it follows that dxhat/dx = 1/mu(2), so the derivative of the centred-and-scaled polynomial must be divided by mu(2) when mapped back to the original x axis.
Which is exactly what you have verified numerically.
The lack of accuracy for large samples is due to the global, rather than local, polynomial fit which you have done. I would suggest that you try to fit your data with splines, and obtain the derivative with fnder(). Another option is to apply the polyfit() function locally, i.e. to a moving small set of points, and then apply polyder() to each of the fitted polynomials.
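A minimal sketch of the spline route (this assumes the Curve Fitting Toolbox, which provides fnder and fnval; the variable names are the ones from the example above):
pp = spline(x, y);        % piecewise cubic spline fit of the data
dpp = fnder(pp);          % derivative of the spline, still in pp form
dy = fnval(dpp, x);       % velocity estimate evaluated at the sample points
plot(x, dy, 'g.')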

Detect incorrect points in a homogeneous surface

In my project I have huge surfaces of 20,000 points computed by an algorithm. This algorithm sometimes has an error, computing 1 or more points in a small area incorrectly.
This error can not be solved in the algorithm, but needs to be detected afterwards.
The error can be seen in the next figure:
As you can see, there is a wrongly computed point that not only breaks the fully homogeneous surface, but also destroys the aesthetics of the plot (which is also important in the project).
Sometimes it can be more than one point, in general no more than 5 or 6. The error is always in the Z axis, so there is no need to check X and Y.
I have been racking my brain to find a somewhat "generic" algorithm to detect these points.
I thought that maybe taking patches of the surface and averaging the Z, then detecting the points outside the variance... but I don't think it will always work.
Any ideas?
NOTE: I don't want someone to write code for me, just an idea.
PS: relevant code for the above image:
[x,y] = meshgrid([-2:.07:2]);
Z = x.*exp(-x.^2-y.^2);
subplot(1,2,1)
surf(x,y,Z,gradient(Z))      % correct surface
subplot(1,2,2)
Z(35,35)=Z(35,35)+0.3;       % introduce one wrongly computed point
surf(x,y,Z,gradient(Z))      % surface with the error
The standard trick is to use a Laplacian, looking for the largest outliers. (This is not unlike what Mohsen posted as an answer, but is actually a bit easier.) You could probably even do it with conv2, so it would be pretty efficient.
I could offer a few ways to implement the idea. A simple one is to use my gridfit tool, found on the File Exchange. (Gridfit essentially uses a Laplacian for its smoothing operation.) Fit the surface with all points included, then look for the single point that was perturbed the most by the fit. Exclude it, then rerun the fit, again looking for the largest outlier. (With gridfit, you can use weights to give points a zero weight, a simple way to exclude a point or list of points.) When the largest perturbation that was needed is small enough, you can decide to stop the process. A nice thing is gridfit will also impute new values for the outliers, filling in all of the holes.
A second approach is to use the Laplacian directly, in more of a filtering approach. Here, you simply compute a value at each point that is the average of each neighbor to the left, right, above, and below. The single value that is most largely in disagreement with its computed average is replaced with a new value. Or, you can use a weighted average of the new value with the old one there. Again, iterate until the process does not generate anything larger than some tolerance. (This is the basis of an old outlier detection and correction scheme that I recall from the Fortran IMSL libraries, but probably dates back to roughly 30 years ago.)
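A minimal sketch of the conv2/Laplacian filtering idea (the 5-point kernel and the pick-the-single-worst-point selection are illustrative choices, not the gridfit method itself):
lap = [0 1 0; 1 -4 1; 0 1 0];      % discrete Laplacian kernel
R = conv2(Z, lap, 'same');         % large |R| means a point disagrees with its neighbours
[~, idx] = max(abs(R(:)));         % single worst outlier (ignoring zero-padding effects at the border)
[i, j] = ind2sub(size(Z), idx);    % its row/column indices in Z
In an iterative version you would replace Z(i,j) with the average of its neighbours (or a weighted blend) and repeat until the largest residual falls below a tolerance, as described above.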
Since your function seems to vary smoothly, these abrupt changes can be detected by looking at the derivatives. You can
Take the derivative in one direction
Calculate mean and standard deviation of derivative
Find the points by looking for points that are further from mean by certain multiple of standard deviation.
Here is the code
U = diff(Z);                        % derivative of Z along the rows
V = (U - mean(U(:)))/std(U(:));     % normalise to zero mean, unit standard deviation
surf(x(2:end,:), y(2:end,:), V)     % plot the normalised derivative (one row shorter than Z)
V = [zeros(1, size(V,2)); V];       % pad back to the original size
V(abs(V) < 10) = 0;                 % keep only values more than 10 standard deviations out
V = sign(V);                        % +1 at upward spikes, -1 at downward spikes
W = cumsum(V);                      % nonzero between an up-spike and the matching down-spike
[I, J] = find(W);                   % row/column indices of the suspect points
outliers = [I, J];
For your example you get this plot for V, with a peak at around 21.7 while the second-largest peak is at around 1.9528, so a threshold of 10 seems reasonable.
and running the code returns
outliers =
35 35
The cumsum is needed for cases where you have a patch of adjacent incorrect points.

spike in my inverse fourier transform

I am trying to compare two data sets in MATLAB. To do this I need to filter the data sets by Fourier transforming the data, filtering it and then inverse Fourier transforming it.
When I inverse Fourier transform the data however I get a spike at either end of the red data set (picture shows the first spike), it should be close to zero at the start, like the blue line. I am comparing many data sets and this only happens occasionally.
I have three questions about this phenomenon: first, what may be causing it; second, how can I remedy it; and third, will it affect the data further along the time series, or just at the beginning and end of the time series as it appears to from the picture?
Any help would be great thanks.
When using the DFT you must remember that the DFT assumes a periodic signal (a superposition of harmonic functions).
As you can see, the start point is treated as an exact continuation of the last point, in a harmonic-function manner.
Did you perform any zero padding in the spectral domain?
Anyhow, windowing might reduce the overshoot.
Knowing more about the filter and the original data would be helpful.
If you mean a spike near zero frequency, my answer is: check the DC component.
You seem interested in the shape, so doing
x = x - mean(x)
or
x -= mean(x)
or
x -= x.mean()
(I love numpy!)
will constrain the dataset to have zero amplitude at zero frequency, so you can go ahead with comparing the spectra's amplitudes.
(As a side note: did you check that you appropriately use fftshift and ifftshift? This has always been a source of trouble for me.)
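Putting the DC-removal and windowing suggestions above into a minimal MATLAB sketch (hann is in the Signal Processing Toolbox; the actual filter is left as a placeholder since it is not specified in the question):
x = x - mean(x);                    % remove the DC component
w = hann(length(x));                % taper the ends to suppress the periodic discontinuity
Xf = fft(x(:) .* w);                % windowed spectrum
% ... apply your filter to Xf here ...
xFiltered = ifft(Xf, 'symmetric');  % back to the time domain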
Could be the numerical equivalent of Gibbs' phenomenon. If that's correct, there's no way to remedy it except for filtering.

MATLAB's fminsearch function

I have two images I'm trying to co-register - i.e., one could be of a ball in the centre of the picture, the other of the same ball near the edge, and I'm trying to find the number of pixels I have to move the second image so that the balls would be in the same place. (I'm actually using 3D MRI brain scans, but the principle is the same.)
I've written a function that will move the ball left, right, up or down by a given number of pixels as well as another function that compares the correlation of the ball-in-the-centre image with the translated ball-at-the-edge image. When the two balls are in the same place the correlation function will return 0 and a number larger than 0 for other positions.
I'm trying to use fminsearch (documentation) to find the optimal translation for the correlation function's minimum (ie, the balls being in the same place) like so:
global reference_im unknown_im;
starting_trans = [0 0 0];
trans_vector = fminsearch(@correlate_images,starting_trans)
correlate_images.m:
function r = correlate_images(translate)
global reference_im unknown_im;
new_im = move_image(unknown_im,translate(1),translate(2),translate(3));
% This bit is unimportant to the question
% but you can see how I calculate my correlation
r = 1 - corr(reshape(new_im,[],1),reshape(reference_im,[],1));
There are two problems, firstly fminsearch insists on passing float values for the translation vector into the correlate_images function. Is there any way to inform it that only integers are necessary? (I would save a large number of cpu cycles!)
Secondly, when I run this program the resulting trans_vector is always the same as starting_trans - I assume this is because no minimum has been found, but is there another reason it's just plain not working?
Many thanks!
EDIT
I've discovered what I think is the reason the output trans_vector is always the same as starting_trans. fminsearch looks at the starting value, then at a small increment in each direction from there; this small increment is always less than one, which means that the result from the correlation will be a perfect match (as move_image will return the same as the input image for sub-pixel movements). I'm going to continue working on convincing MATLAB to fminsearch over integer values only!
First, I'd say that Matlab might not be the best tool for this problem. I'd look at Elastix, which is a pretty user-friendly wrapper around the registration functions in ITK. You get a variety of registration techniques, and the manuals for both programs do a good job of explaining the specifics of image registration.
Second, for this kind of simple translational registration, you can use the FFT. Forward transform both images, multiply the images together (pointwise! That is, use A .* B, not A * B, as those are different operations, and the first is what you want), and there should be a peak in the inverse transform whose offset from the origin is the translational amount you need. Numerical Recipes in C has a good explanation; here's a link to an index pdf. The speed difference between the FFT version and the direct correlation version is huge; the FFT is O(N log N), while the correlation method will be O(N*M), where M is the number of pixels in your search neighborhood. If you want to allow the entire image to be searched, then correlation becomes O(N*N), which will take much longer than the FFT version. Changing parameters from floats to integers won't solve the problem.
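A minimal sketch of that FFT approach in MATLAB (note that the usual cross-correlation multiplies one transform by the complex conjugate of the other; image1 and image2 are assumed to be same-size 2-D arrays):
F1 = fft2(image1);
F2 = fft2(image2);
xc = ifft2(F1 .* conj(F2));         % circular cross-correlation of the two images
[~, idx] = max(abs(xc(:)));         % location of the correlation peak
[dy, dx] = ind2sub(size(xc), idx);  % 1-based peak indices
shift = [dy - 1, dx - 1]            % translation estimate, modulo the image size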
The reason the fminsearch function uses floats (if I can guess at the reasons behind the coders' decisions) is that for problems that aren't test problems (ie, spheres in a volume), you often need sub-pixel resolution to perform a correct registration. Take a look at the ITK documentation about the reasons behind this approach.
Third, I'd suggest that a good way to write this program in Matlab (if you still want to do so!) while still forcing integer correlations would be to avoid the fminsearch function, which will want to use floats. Try something like:
startXPos = -10; % these parameters dictate the size of your search neighborhood
startYPos = -10; % corresponds to M in the above explanation
endXPos = 10;
endYPos = 10;
optimalX = 0;
optimalY = 0;
maxCorrVal = 0;
for i = startXPos:endXPos
    for j = startYPos:endYPos
        % test the correlation of the two images here, where one image is shifted relative to the other
        currCorrVal = Correlate(image1, image2OffsetByiAndj);
        if (currCorrVal > maxCorrVal)
            maxCorrVal = currCorrVal;
            optimalX = i;
            optimalY = j;
        end
    end
end
From here, you just have to write the offset function. This way, you avoid the float problem, and you're also incrementing your translation vector (I don't see any way for that vector to move in your provided functions, which probably explains your lack of movement).
There is a very similar demo in the Image Processing Toolbox that uses the normalized cross-correlation function normxcorr2 to perform image registration. To avoid repeating the same thing, check out the demo directly:
Registering an Image Using Normalized Cross-Correlation
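For completeness, a minimal sketch of that normalized cross-correlation idea (normxcorr2 is in the Image Processing Toolbox; here template is assumed to be the smaller image being located inside background):
c = normxcorr2(template, background);                            % normalized cross-correlation surface
[~, idx] = max(abs(c(:)));                                       % location of the correlation peak
[ypeak, xpeak] = ind2sub(size(c), idx);                          % 1-based peak indices
offset = [ypeak - size(template,1), xpeak - size(template,2)]    % offset of the template within the background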