I have a function file with my differential equations, and I am running ode23s on it in the standard form, i.e.
[t,m]=ode23s('DE_function',tspan,[mA pA mB pB mC pC mD],optionsDE,p)
I obtain about 150 output values for mA and each of the other states. The ode23s call itself is working fine.
I have an experimental dataset for the same mA and so on, which I have to use to calculate the least-squares error. I am trying to do this:
a = m(:,1) - A(:,2); and so on. The problem is that my experimental data has just 20 values, one for each of 20 time points, and I have defined the same time points in tspan. Since the matrices do not match in dimension, I cannot proceed with the calculation. Is there a way to get exactly 20 values from ode23s, one at each of the 20 time points (1, 2, etc.), or at least a way to store only those?
I have been trying to find a solution for this but have not found anything suitable. Many thanks for any suggestions and hints.
The MATLAB documentation has all you need. When you call ode23s (or any of the ODE solvers), you can specify the output time locations in tspan.
"Interval of integration, specified as a vector. At minimum, tspan must be a two element vector [t0 tf] specifying the initial and final times. To obtain solutions at specific times between t0 and tf, use a longer vector of the form [t0,t1,t2,...,tf]. The elements in tspan must be all increasing or all decreasing."
I am trying to build a multiple linear regression in MATLAB with 20 predictors, which are categorical with 4 levels each. I am using the function "regress", like this (these are not the actual variables):
X = [ones(size(x1)) x1 x2 x3...x20];
[b,bint,r,rint] = regress(Y, X);
Before this, I transformed the vectors x1, x2, ..., x20 into dummy variables with dummyvar.
I get a lot of 0's in the b coefficients, along with this warning:
Warning: X is rank deficient to within machine precision.
In the dummyvar documentation it is mentioned:
To use the dummy variables in a regression model, you must either delete a column (to create a reference group) or fit a regression model with no intercept term.
I tried leaving out the intercept column ones(size(x1)) and I get the same warning.
I would appreciate any input on how to solve this.
Try to simplify the problem down to the minimum working example, and then post that here, so we can reproduce it and help you through. See https://en.wikipedia.org/wiki/Rank_(linear_algebra)
for examples of rank deficiency.
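As a starting point for that minimal example, here is a hedged sketch of the fix the dummyvar documentation points to: drop one column from each dummy-variable block so it becomes the reference level (x1, x2 and Y are placeholders for your actual variables):
d1 = dummyvar(grp2idx(x1));                % 4 columns, one per level of x1
d2 = dummyvar(grp2idx(x2));                % likewise for x2, and so on up to x20
% Drop the first column of each block (reference group) so the blocks are no
% longer linearly dependent on the intercept column.
X = [ones(numel(x1),1) d1(:,2:end) d2(:,2:end)];
[b,bint,r,rint] = regress(Y(:), X);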
I am trying to understand the following set of equations given here: https://matlabgeeks.com/tips-tutorials/modeling-with-odes-in-matlab-part-5b/
The equations are those of a chaotic Lorenz system. The tutorial is quite easy to understand, but what I do not follow is how to set the number of data points generated, i.e., the length of the time series. Which parameter decides how many data points will be generated? Can somebody please help? I have looked into other resources as well but could not figure it out. For instance, by trial and error I found that if I specify
eps = 0.000001; T = [0 45], then the number of data points is about 7000. If I want 10,000 data points, I don't know what the values of these parameters should be.
As described in the article (and the previous parts 1 and 2 of the series), the sequence of sample points is generated dynamically so that each segment contributes about the same amount of truncation error towards the global error, weighted by the absolute and relative tolerances. Additionally, it uses interpolation inside the segment to produce 3 inner points so that a plot will appear curved also for large tolerances. That is, the internal segmentation is given by T(1:4:end), the other points are interpolated.
You can also prescribe your own sample times, the values there get likewise interpolated from the "dense output", the interpolations over the internally produced segmentation.
T = linspace(t0, tend, 7000);
[T, Y] = ode45('lorenz', T, Y0, options);
You could also extract the dense output via
sol = ode45('lorenz', [t0 tend], Y0, options);
and then use the provided interpolation to compute samples at arbitrary times
Y = deval(sol,T);
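For example, to hit the 10,000 samples you mention (a hedged sketch, assuming the lorenz function file, Y0 and options from the tutorial):
T   = linspace(0, 45, 10000);      % exactly 10,000 sample times
sol = ode45('lorenz', [0 45], Y0, options);
Y   = deval(sol, T);               % one column of interpolated states per sample time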
In Empirical error proof Runge-Kutta algorithm ... I also computed the error for the Lorenz system for a fixed-step RK method, which shows the same divergence of the solutions after a relatively short time.
I am trying to model Kuramoto oscillations in MATLAB. I tried using ode45 to solve the system. I also saw someone else use the Runge-Kutta method. I understand that ode45 uses a Runge-Kutta method; however, the values I obtain from each are suspiciously different.
kuramoto = @(x,K,N,Omega) Omega+(K/N)*sum(sin(x*ones(1,N)-(ones(N,1)*x')))';
%Kuramoto is a model of N coupled oscillators (such as multiple radio waves)
%The solution to the model is the phase of each oscillator
%(Kuramoto equation image omitted)
theta(:,1) = 2*pi*randn(N,1);
t0 = theta(:,1);
[t,y] = ode45(@(t,y)kuramoto(theta(:,1),K,N,omega),tspan,t0);
%Runge-Kutta method
for j=1:iter
k1=kuramoto(theta(:,j),K,N,omega);
k2=kuramoto(theta(:,j)+0.5*h*k1,K,N,omega);
k3=kuramoto(theta(:,j)+0.5*h*k2,K,N,omega);
k4=kuramoto(theta(:,j)+h*k3,K,N,omega);
theta(:, j+1)=theta(:,j)+(h/6)*(k1+2*k2+2*k3+k4);
end
Both methods output a matrix with N rows (where each row represents a different oscillator) and M columns (where each column is the solution at a given time). I have ode45 provide solutions from 0 to 0.5 at 0.1 intervals. To compare the methods, I take the difference between the matrix obtained from Runge-Kutta and the matrix obtained using ode45. Ideally, the two should have the same values and the result should be a zero matrix, but instead I get values such as:
0 -0.0003 -0.0012 -0.0027 -0.0048 -0.0076
0 0.0003 0.0012 0.0027 0.0048 0.0076
%here I have only two oscillators from t = [0.0,0.5]
There is a small difference between the two matrices (which grows over longer time intervals). But oddly, the total value calculated at each time (i.e., the sum of each column) is the same. This is consistent regardless of the number of oscillators.
I am unsure if this is a math problem or a programming problem (it's probably both). I think I am calling ode45 incorrectly, but I am not sure and haven't been able to figure out what is wrong for a few days. Any help would be appreciated.
You should use the ode45 output. Runge-Kutta as you have implemented it will eventually become unstable if you choose a step size that is too large. The entire point of ode45 is that it internally runs a Runge-Kutta 4 and a Runge-Kutta 5 scheme. If the results of the two differ too much within an integration step, ode45 reduces the time step until they are comparable. Using the raw fixed-step method like you are doing will obviously not do that.
Technically, things like ode45 are called "embedded Runge Kutta" methods. Here is one such method: https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%E2%80%93Fehlberg_method
They are efficient because the Runge Kutta methods of different order reuse a lot of the same function evaluations.
All this being said, you should find that if you reduce your time step enough, that the results are almost identical. The only reason that they differ is that ode45 is internally refining the time step when it detects that the solution may be inaccurate.
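One way to see this (a hedged sketch, reusing tspan, t0, K, N and omega from the question, and passing the solver's state into kuramoto rather than the fixed theta(:,1)):
opts  = odeset('RelTol',1e-10,'AbsTol',1e-12);   % force very small internal steps
[t,y] = ode45(@(t,u)kuramoto(u,K,N,omega), tspan, t0, opts);
% With a comparably small h in the RK4 loop, the two solutions should now
% agree to within the requested tolerance over the interval [0, 0.5].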
Did you actually use the line
[t,y] = ode45(@(t,y)kuramoto(theta(:,1),K,N,omega),tspan,t0);
in your running code? Then the result here is definitely wrong. Use
[t,y] = ode45(@(t,u)kuramoto(u,K,N,omega),tspan,t0);
to get results that are at least related to the RK4 integration. That is, use the declared local variable/parameter of the anonymous function in the computation of its value. (Using u instead of y or theta to not reuse a variable name that is also used in the more global scope. Could use thetalocal instead if self-documenting variable names were desired.)
PS: That the difference sums to zero is due to the fact that the derivative vectors sum to zero, so that the sum over the state vector is a constant regardless of the errors committed in applying the methods. So if you subtract the same constant from itself you end up again with zero. If the state vector only has 2 elements, the elements of the difference vector thus have to be opposite.
The following is Exercise 3 of a Numerical Analysis task I have to do as part of my university course on the subject.
Find an approximation of tomorrow's temperature based on the last 23
values of hourly temperature of your city ( Meteorological history for
Thessaloniki {The city of my univ} can be found here:
http://freemeteo.com)
You will approximate the temperature function with a polynomial of
2nd, 3rd and 4th degree, using the Least Squares method. Following
that, you will find the value of the function at the point that
interests you. Compare your approximations qualitatively and make a
note to the time and date you're doing the approximation on.
Maybe it's fatigue from doing the first two tasks without a break, or it's my lack of experience with numerical analysis, but I am completely stumped. I do not even know where to start.
I know it's disgusting to ask for a solution without even showing signs of effort, but I would appreciate anything. Leads, tutorials, outlines of the things I need to work on, one after the other, anything.
I'd be very much obliged to you.
NOTE: I am not able to use any MATLAB built-in approximation functions.
In general, if y is your data vector belonging to the times t, and c is the coefficient vector you're interested in, then you need to solve the linear system
Ac = y
in a least-squares sense, where
A = bsxfun(@power, t(:), 0:n)
In MATLAB you can do this with mldivide:
c = A\y(:)
Example:
>> t = 0 : 0.1 : 20; %// Define some times
>> y = pi + 0.8*t - 3.2*t.^2; %// Create some synthetic data
>> y = y + randn(size(y)); %// Add some noise for good measure
>>
>> n = 2; %// The order of the polynomial for the fit
>> A = bsxfun(@power, t(:), 0:n); %// Design matrix
>> c = A\y(:) %// Solve for the coefficient vector
c =
3.142410118189416e+000
7.978077631488009e-001 %// Which works pretty well
-3.199865079047185e+000
But since you are not allowed to use any built-in functions, you can use this simple solution only to check your own outcomes. You'll have to write an implementation of the equations given on (for example) Wolfram's MathWorld.
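For reference, here is a hedged sketch of the normal-equations route described on those pages, using only elementary matrix operations (whether the backslash operator counts as an allowed built-in is something to check with your instructor; temperatures is a placeholder for your 23 recorded values):
n = 2;                              % polynomial degree (repeat with 3 and 4)
t = (1:23)';                        % the 23 hourly time stamps
y = temperatures(:);                % the 23 measured temperatures (placeholder)

% Design matrix with columns [1, t, t.^2, ..., t.^n]
A = ones(numel(t), n+1);
for k = 1:n
    A(:,k+1) = t.^k;
end

% Normal equations: (A'*A)*c = A'*y
c = (A'*A) \ (A'*y);

% Evaluate the fitted polynomial at hour 24 to predict tomorrow's value
t_new  = 24;
y_pred = sum(c'.*t_new.^(0:n));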
I have several equations, each with its own frequency and amplitude. I would like to sum the equations together and adjust the individual phases phase1, phase2, and phase3 to keep the total amplitude of eq_total under a specific value like 0.8. I know I can normalize the signal or change the vertical offset, but for my purposes the amplitude has to be controlled by changing/finding only the values of phase1, phase2, and phase3 that limit the maximum amplitude when the equations are summed.
Note: I'm using constructive and destructive phase interference to adjust the maximum amplitude of the summed equations.
Example:
eq1 = 0.2*cos(2*pi*t*3+phase1) + vertical_offset1
eq2 = 0.7*cos(2*pi*t*9+phase2) + vertical_offset2
eq3 = 0.8*cos(2*pi*t*5+phase3) + vertical_offset3
eq_total=eq1+eq2+eq3
Is there a way to solve for phase1, phase2, and phase3 so that the amplitude of the summed signal in eq_total never goes over 0.8, just by adjusting/finding the values of phase1, phase2, and phase3?
Here's a picture of a GeoGebra applet I tested this idea with.
Here's the GeoGebra .ggb file I used to test the idea (I used it to check whether my approach would work). Java is required if you want to interact with the applet dynamically:
http://dl.dropbox.com/u/6576402/questions/ggb/sin_find_phases_example.ggb
I'm using matlab/octave
Thanks
Your example
eq1 = 0.2*cos(2*pi*t*3+phase1) + vertical_offset1
eq2 = 0.7*cos(2*pi*t*9+phase2) + vertical_offset2
eq3 = 0.8*cos(2*pi*t*5+phase3) + vertical_offset3
eq_total=eq1+eq2+eq3
where the maximum amplitude should be less than 0.8, has infinitely many solutions. Unless you have some additional objective you'd like to achieve, I suggest that you modify the problem such that you find the combination of phase angles that has a maximum amplitude of exactly 0.8 (or 0.79, such that you're guaranteed to be below).
Furthermore only two out of three phase angles are independent; if you increase all by, say, pi/3, the solution still holds. Thus, you have only two unknowns in eq_total.
You can solve the nonlinear optimization problem using e.g. FMINSEARCH. You formulate the problem such that max(abs(eq_total(phase1,phase2))) should equal 0.79.
Thus:
%# define the vector t, verticalOffset here
%# objectiveFunction is (eq_total-0.79)^2, so the phase shifts 1 and 2 that
%# satisfy this (approximately) should guarantee that signal never exceeds 0.8
objectiveFunction = @(phase)(max(abs(0.2*cos(2*pi*t+phase(1))+0.7*cos(2*pi*t*9+phase(2))+0.8*cos(2*pi*t*5)+verticalOffset)) - 0.79)^2;
%# search for optimal phase shift, starting at no shift
solution = fminsearch(objectiveFunction,[0;0]);
EDIT
Unfortunately, when I try this code and plot the results, the maximum amplitude is not 0.79, it's over 1. Am I doing something wrong? See the code below:
t = linspace(0,1,8000);
verticalOffset = 0;
objectiveFunction = @(phase)(max(abs(0.2*cos(2*pi*t+phase(1))+0.7*cos(2*pi*t*9+phase(2))+0.8*cos(2*pi*t*5)+verticalOffset)) - 0.79)^2;
s1 = fminsearch(objectiveFunction,[0;0])
eqt = 0.2*cos(2*pi*t+s1(1))+0.7*cos(2*pi*t*9+s1(2))+0.8*cos(2*pi*t*5)+verticalOffset;
plot(eqt)
fminsearch will find a minimum of the objective function. Whether this solution satisfies all your conditions is something you have to test. In this case, the solution given by fminsearch with the starting value [0;0] gives a maximum of ~1.3, which is obviously not good enough. However, when you plot the maximum for a range of phase angles from 0 to 2*pi, you'll see that fminsearch didn't get stuck in a bad local minimum. Rather, there is no good solution at all (the z-axis of that surface is the maximum amplitude).
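A hedged sketch of that brute-force check, scanning both phase angles over [0, 2*pi] and recording the maximum amplitude of the summed signal (t and verticalOffset as defined in the code above):
phi = linspace(0, 2*pi, 100);                 % candidate phase shifts
M   = zeros(numel(phi));                      % max amplitude for each (phase1, phase2) pair
for i = 1:numel(phi)
    for j = 1:numel(phi)
        s = 0.2*cos(2*pi*t+phi(i)) + 0.7*cos(2*pi*t*9+phi(j)) ...
            + 0.8*cos(2*pi*t*5) + verticalOffset;
        M(i,j) = max(abs(s));
    end
end
surf(phi, phi, M.')                           % the z-axis is the achievable maximum
min(M(:))                                     % smallest maximum amplitude over the grid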
If I understand you correctly, you are trying to find a phase to vary the amplitude of a signal. To my knowledge, this is not possible.
For a signal
s = A * cos (w*t + phi)
only A allows you to change the amplitude. With w you change the frequency of the signal and phi regulates the "horizontal shift".
Furthermore, I think you are missing a "moving variable" like the time t in the equation above.
Maybe this article clarifies things a little.
If you set all the vertical offsets to be equal to -1, then it solves your problem because each eq# will never be > 0, so the sum can never be >0.8.
I know that this isn't that helpful, but I'm hoping that this will help you understand your problem better.