Gaps In Plot Of Piecewise Function in Matlab

I want to plot a piecewise function, but I don't want any gaps to appear at the junctures. For example:
t = 1:8784;
b = (26.045792 + 13.075558*sin(0.0008531214*t - 2.7773943)).*(heaviside(t-2184) - heaviside(t-7440));
plot(b,'r','LineWidth',1.5); grid on
There should not be any gaps in the plot between the three intervals, but they appear anyway. I want the graph to be continuous, without gaps.
Any suggestions on how to achieve that?
Thanks in advance.
EDIT
Actually, my aim is to find the carrier function colored yellow in the figure below. I divide the whole interval into three subintervals, 1) constant, 2) sinusoidal, 3) constant, and then I want to find the overall function from these three pieces.

Of course there are "gaps". The composite function is identically zero for all t < 2184 and for all t > 7440; it can only be non-zero inside that interval. And you have not chosen a function that is zero at the endpoints, so how can you expect there not to be "gaps"?
What values does your function take on at the endpoints of the interval?
>> t = [2184 7440];
>> (26.045792 + 13.075558*sin(0.0008531214*t - 2.7773943))
ans =
15.689 20.616
So look at the hat function part of this. I'll be lazy and use ezplot.
>> ezplot(@(t) ((heaviside(t-2184))-(heaviside(t-7440))),[0,8784])
Now, combine this, multiplying it by a trig piece, and of course the result is identically zero outside of that domain.
>> ezplot(@(t) (26.045792 + 13.075558*sin(0.0008531214*t - 2.7773943)).*((heaviside(t-2184))-(heaviside(t-7440))),[0,8784])
But if your goal is some sort of continuous function across the two chosen points in the hat function, you need to choose the trig part such that it is zero at those same two points. Mathematics is not spelled mathemagics. Wishing that you get a continuous function will not make it so.
So is your real question how to choose that internal piece (segment) such that the final result is continuous? If so, then we need to know why you have chosen the arbitrary constants in there. Surely these numbers, {26.045792, 13.075558, 0.0008531214, 2.7773943}, all have some significance to you. And if they are important, then how can we possibly make the result a continuous function?
Perhaps, and I'm just guessing here, you want some other result out of this, such that the function is not identically zero outside of those bounds. Perhaps you wish to extrapolate as a constant function outside of those points. But to help you, you must help us.
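If that constant-extrapolation guess is what you are after, here is a minimal sketch (my guess at the intent, not necessarily what you need): hold the endpoint values of the trig piece as constants outside the interval.
t = 1:8784;
s = @(t) 26.045792 + 13.075558*sin(0.0008531214*t - 2.7773943);  % the trig piece
b = s(t);
b(t < 2184) = s(2184);   % hold the left endpoint value (about 15.689) constant
b(t > 7440) = s(7440);   % hold the right endpoint value (about 20.616) constant
plot(t,b,'r','LineWidth',1.5); grid on   % now continuous across both junctions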

Related

Clean methodology for running a function for a large set of input parameters (in Matlab)

I have a differential equation that's a function of around 30 constants. The differential equation is a system of (N^2+1) equations (where N is typically 4). Solving this system produces N^2+1 functions.
Often I want to see how the solution of the differential equation functionally depends on constants. For example, I might want to plot the maximum value of one of the output functions and see how that maximum changes for each solution of the differential equation as I linearly increase one of the input constants.
Is there a particularly clean method of doing this?
Right now I turn my differential-equation-solving script into a large function that returns an array of output functions. (Some of the inputs are vectors & matrices). For example:
for i = 1:N
    [OutputArray1(i, :), OutputArray2(i, :), OutputArray3(i, :), OutputArray4(i, :), OutputArray5(i, :)] = DE_Simulation(Parameter1Array(i));
end
Here I loop over the parameter values. The function solves a differential equation and returns the set of solution functions for that input parameter, and each is appended as a row to a matrix.
There are a few issues I have with my method:
If I want to see the solution of the differential equation as a different parameter varies, I have to redefine the function so that that parameter (one of the thirty others) becomes an input. For the sake of code readability, I cannot see myself explicitly writing all of the input parameters as individual arguments. (I've read that structures might be helpful here, but I'm not sure how that would be implemented.)
I typically get lost in parameter space and often have to update the same parameter across multiple scripts. I have one script that runs the differential-equation-solving function and a second script that plots the set of simulated data. (I save the local variables to a file so that I can load them explicitly for plotting, but I often get lost figuring out which file is associated with which set of parameters.) The parameters that are not inputs of the function live inside the function itself. I've tried making the parameters global, but doing so drastically slows down my code. Additionally, some of the inputs are arrays I would like to plot and inspect before running the solver. (Some of the inputs are time-dependent boundary conditions, and I often want to see what they look like first.)
I'm trying to figure out a good method to keep track of everything. I'm trying to come up with a smart way of saving generated figures with a file tag that records all the parameters associated with that figure. I can save such a list in a text file keyed by a generic tag number shown in the figure title, but that feels like an awkward system, particularly because it's not easy to see what differs between two long lists of 30+ parameters.
Overall, I feel as though what I'm doing is fairly simple, yet I don't have a good coding methodology and consequently end up wasting a lot of time saving almost-identical functions and scripts to solve fairly simple tasks.
It seems like what you really want here is something that deals with N-D arrays instead of splitting up the outputs.
If all of the OutputArray_ variables have the same number of rows, then the line
for i = 1:N
    [OutputArray1(i, :), OutputArray2(i, :), OutputArray3(i, :), OutputArray4(i, :), OutputArray5(i, :)] = DE_Simulation(Parameter1Array(i));
end
seems to suggest that what you really want your function to return is an M x K array (where in this case, K = 5), and you want to pack that output into an M x K x N array. That is, it seems like you'd want to refactor your DE_Simulation to give you something like
for i = 1:N
    OutputArray(:,:,i) = DE_Simulation(Parameter1Array(i));
end
If they aren't the same size, then a struct or a table is probably the best way to go, as you could assign to one element of the struct array per loop iteration or one row of the table per loop iteration (the table approach would assume that the size of the variables doesn't change from iteration to iteration).
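For instance, a minimal sketch of the struct-array variant (the field names here are illustrative, not from your code):
for i = 1:N
    [o1, o2, o3, o4, o5] = DE_Simulation(Parameter1Array(i));
    % one struct per parameter value; the fields may have different sizes
    Results(i) = struct('Out1',o1,'Out2',o2,'Out3',o3,'Out4',o4,'Out5',o5);
end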
If, for some reason, you really need to have these as separate outputs (and perhaps later as separate inputs), then what you probably want is a cell array. In that case you'd be able to deal with the variable number of outputs by doing something like
for i = 1:N
    [OutputArray{i, 1:K}] = DE_Simulation(Parameter1Array(i));
end
I hesitate to even write that, though, because this almost certainly seems like the wrong data structure for what you're trying to do.

GUI solving equations project

I have a Matlab project in which I need to make a GUI that receives two mathematical functions from the user. I then need to find their intersection point, and then plot the two functions.
So, I have several questions:
Do you know of an algorithm I can use to find the intersection point? I'd prefer one for which MATLAB code can already be found on the internet, and preferably not the Newton-Raphson method.
I should point out I'm not allowed to use built in Matlab functions.
I'm having trouble plotting the functions. What I basically did is this:
fun_f = get(handles.Function_f,'string');
fun_g = get(handles.Function_g,'string');
cla % To clear axes when plotting new functions
ezplot(fun_f);
hold on
ezplot(fun_g);
axis ([-20 20 -10 10]);
The problem is that sometimes the axis limits do not let me see the other function. This happens, for example, if one function is log10(x) and the other is y = 1; the y = 1 line is not shown.
I have already tried all the axis commands, but to no avail. If I set the limits myself, the functions are only drawn over part of the range. I have no idea why.
3. How do I display numbers in a static text? Or better yet, a string with numbers?
I want to display something like x0 = [root1]; x1 = [root2]. The only solution I found was turning the roots into strings, but I'd prefer not to.
As for the equation solver, this is the code I have so far. I know it is very amateurish, but it seemed like the most "intuitive" way. Also keep in mind it is very, very unfinished (for example, it shows only two solutions, and I'm not sure how to display multiple roots in one static text since they are strings, hence question #3).
function [Sol] = SolveEquation(handles)
fun_f = get(handles.Function_f,'string');
fun_g = get(handles.Function_g,'string');
f = inline(fun_f);
g = inline(fun_g);
i = 1;
Sol = 0;
for x = -10:0.1:10
    if (g(x) - f(x)) >= 0 && (g(x) - f(x)) < 0.01
        Sol(i) = x;
        i = i + 1;
    end
end
solution1 = num2str(Sol(1));
solution2 = num2str(Sol(2));
set(handles.roots1,'string',solution1);
set(handles.roots2,'string',solution2);
The if condition is there because the subtraction will never give me an exact zero, and this seems to mostly work, though it's really not perfect; sometimes it gives me more than two very similar solutions (e.g. 1.9 and 2).
The range of x is arbitrary, chosen by me.
I know this is a long question, so I really appreciate your patience.
Thank you very much in advance!
Question 1
I think this is a more robust method for finding the roots given data at discrete points: look for where the difference between the functions changes sign, which corresponds to the curves crossing over.
S = sign(g(x)-f(x));
h = find(diff(S)~=0);   % indices where the sign of g - f flips
Sol = x(h);
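A self-contained usage example, borrowing the log10(x) and y = 1 pair from your question (the function handles and sampling grid are my additions):
f = @(x) log10(x);        % first curve
g = @(x) ones(size(x));   % second curve, y = 1
x = 0.05:0.02:20;         % sampling grid over the region of interest
S = sign(g(x) - f(x));
h = find(diff(S) ~= 0);   % index just before the sign change
Sol = x(h)                % about 9.99, bracketing the true root at x = 10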
If you can evaluate the functions wherever you want, there are more methods available, but which is best depends on the size of the domain and the accuracy you want. For example, if you don't need a great deal of accuracy, your f and g functions are cheap to evaluate, and you can't or don't want to use derivatives, you can get a more accurate root using the same idea as the first code snippet, applied iteratively:
G = inline('sin(x)');
F = inline('1');
g = vectorize(G);
f = vectorize(F);
tol = 1e-9;
tic()
x = -2*pi:.001:pi;
S = sign(g(x)-f(x));
h = find(diff(S)~=0); % Find where the two lines cross over
Sol = zeros(size(h));
Err = zeros(size(h));
if ~isempty(h) % There are some cross-over points
    for i = 1:length(h) % For each point, improve the approximation
        xN = x(h(i):h(i)+1);
        err = 1;
        while (abs(err) > tol) % Iteratively improve the approximation
            S = sign(g(xN)-f(xN));
            hF = find(diff(S)~=0);
            xN = xN(hF:hF+1);
            [~,I] = min(abs(f(xN)-g(xN)));
            xG = xN(I);
            err = f(xG)-g(xG);
            xN = linspace(xN(1),xN(2),15);
        end
        Sol(i) = xG;
        Err(i) = f(xG)-g(xG);
    end
else % No crossover points - the lines could meet at tangents
    [h,I] = findpeaks(-abs(g(x)-f(x)));
    Sol = x(I(abs(f(x(I))-g(x(I)))<1e-5));
    Err = f(Sol)-g(Sol);
end
% We also have to check each endpoint
if abs(f(x(end))-g(x(end)))<tol && abs(Sol(end)-x(end))>1e-12
    Sol = [Sol x(end)];
    Err = [Err g(x(end))-f(x(end))];
end
if abs(f(x(1))-g(x(1)))<tol && abs(Sol(1)-x(1))>1e-12
    Sol = [x(1) Sol];
    Err = [g(x(1))-f(x(1)) Err];
end
toc()
Sol
Err
This will "zoom" in to the region around each suspected root, and iteratively improve the accuracy. You can tweak the parameters to see whether they give better behaviour (the tolerance tol, the 15, number of new points to generate, could be higher probably).
Question 2
You would probably be best off avoiding ezplot and using plot, which gives you greater control. You can vectorize inline functions so that they can be evaluated like anonymous functions, as I did in the previous code snippet, using
f=inline('x^2')
F=vectorize(f)
F(1:5)
and this should make plotting much easier:
plot(x,f(x),'b',Sol,f(Sol),'ro',x,g(x),'k',Sol,g(Sol),'ro')
Question 3
I'm not sure why you don't want to display your roots as strings; what's wrong with this:
text(xPos,yPos,['x0=' num2str(Sol(1))]);
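And if the issue is fitting several roots into one static text, sprintf can build the whole string at once (handles.roots1 and Sol come from your code; the formats are just an example, and strjoin needs R2013a or later):
% two roots in one static text
set(handles.roots1,'string',sprintf('x0 = %.4g;  x1 = %.4g',Sol(1),Sol(2)));
% or, for however many roots were found:
parts = arrayfun(@(k) sprintf('x%d = %.4g',k-1,Sol(k)), 1:numel(Sol), 'UniformOutput',false);
set(handles.roots1,'string',strjoin(parts,';  '));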

matlab zplane function: handles of vectors

I'm interested in understanding the variety of zeros that a given function produces, with the ultimate goal of identifying which frequencies are passed in high/low-pass filters. My idea is that finding the lowest-valued zero of a filter will identify the passband for an LPF specifically. I'm attempting to use the [hz,hp,ht] = zplane(z,p) function to do so.
The description of that function reads "returns vectors of handles to the zero lines, hz". Could someone explain what a vector of handles is, and what I do with one to find the various zeros?
For example, a simple 5-point running average filter:
runavh = (1/5) * ones(1,5);
using zplane(runavh) gives an acceptable pole/zero plot, but running the [hz,hp,ht] = zplane(z,p) form results in hz = 175.1075. I don't know what this number represents or how to use it.
Many thanks.
Using the get command, you can find out things about the data.
For example, type G=get(hz) to get a list of properties of the zero lines. Then the XData is given by G.XData, i.e. X=G.XData.
Alternatively, you can pull out only the data you want:
X=get(hz,'XData')
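Putting that together for your running-average example, a short sketch (the variable names are mine):
runavh = (1/5)*ones(1,5);        % the 5-point running-average filter
[hz,hp,ht] = zplane(runavh,1);   % row vectors are treated as transfer-function coefficients b, a
X = get(hz,'XData');             % real parts of the plotted zeros
Y = get(hz,'YData');             % imaginary parts of the plotted zeros
zeroLocations = complex(X,Y)     % the filter's zeros as complex numbers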
Hope that helps.

function parameters in matlab wander off after curve fitting

First, a little background: I'm a psychology student, so my background in coding isn't on par with you guys :-)
My problem is as follows, and the most important observation is that curve fitting with two different programs gives completely different results for my parameters, although my graphs stay the same. The main program we have used to fit my longitudinal data is KaleidaGraph, and this should be seen as kind of the 'gold standard'; the program I'm trying to move to is Matlab.
I was trying to be smart and wrote some code (a lot at least for me) and the goal of that code was the following:
1. Taking an individual longitudinal datafile
2. curve fitting this data on a non-parametric model using lsqcurvefit
3. obtaining figures and the points where f' and f'' are zero
This all worked well (woohoo :-)), but when I started comparing the function parameters the two programs generate, there is a huge difference. The KaleidaGraph fit stays close to its original starting values. Matlab wanders off, and the parameters sometimes grow by a factor of 1000. The graphs, however, stay more or less the same in both cases, and both fit the data well. Still, it would be lovely to know how to make the Matlab curve fitting more 'conservative', i.e. located nearer its original starting values.
validFitPersons = true(nbValidPersons,1);
for i = 1:nbValidPersons
    personalData = data{validPersons(i),3};
    personalData = personalData(personalData(:,1)>=minAge,:);
    % Fit a specific model for all valid persons
    try
        opts = optimoptions(@lsqcurvefit, 'Algorithm', 'levenberg-marquardt');
        [personalParams,personalRes,personalResidual] = lsqcurvefit(heightModel,initialValues,personalData(:,1),personalData(:,2),[],[],opts);
    catch
        x = 1; % placeholder so a failed fit doesn't stop the loop
    end
Above is the part of the code I've written to fit the data files to a specific model.
Below is an example of a non-parametric model I use, with its function parameters.
elseif strcmpi(model,'jpa2')
    % y = a*(1 - 1/(1 + (b1*(t+e))^c1 + (b2*(t+e))^c2 + (b3*(t+e))^c3))
    heightModel = @(params,ages) abs(params(1).*(1-1./(1+(params(2).*(ages+params(8))).^params(5) ...
        +(params(3).*(ages+params(8))).^params(6)+(params(4).*(ages+params(8))).^params(7))));
    modelStrings = {'a','b1','b2','b3','c1','c2','c3','e'};
    % Define initial values
    if strcmpi('male',gender)
        initialValues = [176.76 0.339 0.1199 0.0764 0.42287 2.818 18.52 0.4363];
    else
        initialValues = [161.92 0.4173 0.1354 0.090 0.540 2.87 14.281 0.3701];
    end
I've tried to mimic the curve-fitting process in KaleidaGraph as closely as possible. There I found they use the Levenberg-Marquardt algorithm, which I've selected. However, results still vary, and I don't have any more clues about how to change this.
Some extra adjustments:
The idea for this code was the following:
I'm trying to compare different fitting models (they are designed for this purpose). So I have 5 models with different parameters and different starting values (the second part of my code), and then I have the general curve-fitting file. Since there are different models, it would be interesting if I could put restrictions on how far my parameters can wander off from their starting values.
Does anyone have any idea how this could be done?
Anybody willing to help a psychology student?
Cheers
This is a common issue when dealing with non-linear models.
If I were you, I would try to check whether you can remove some parameters from the model in order to simplify it.
If you really want to keep your solution not too far from the initial point, you can use upper bounds and lower bounds for each variable:
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub.
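For your code, that might look like the sketch below, which keeps each parameter within a factor of two of its starting value (the factor is arbitrary). One caveat: bound constraints require lsqcurvefit's default trust-region-reflective algorithm; the levenberg-marquardt option does not accept lb/ub.
lb = initialValues/2;   % lower bounds: half of the starting values
ub = initialValues*2;   % upper bounds: twice the starting values
opts = optimoptions(@lsqcurvefit,'Algorithm','trust-region-reflective');
personalParams = lsqcurvefit(heightModel,initialValues, ...
    personalData(:,1),personalData(:,2),lb,ub,opts);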
Cheers
You state:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is I have 5 models with different parameters and different starting values (the second part of my code) and next I have the general curve fitting file.
You will presumably compare the statistics from fits with different models, to see whether reductions in the fitting error are unlikely to be due to chance. You may want to rely on that comparison to pick the model that not only fits your data suitably but is also simplest (which is often referred to as the principle of parsimony).
The problem is really that the model you have shown results in correlated parameters, and therefore overfitting, as mentioned by @David. Again, this should be resolved when you compare different models and find that some do just as well (statistically speaking) even though they involve fewer parameters.
edit
To drive the point home regarding the problem with the choice of model, here are (1) the results of a trial fit using simulated data and (2) the correlation matrix of the parameters in graphical form:
Note that absolute values of the correlation close to 1 indicate strongly correlated parameters, which is highly undesirable. Note also that the trend in the data is practically linear over a long portion of the dataset, which implies that 2 parameters might suffice over that stretch, so using 8 parameters to describe it seems like overkill.
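If you want to inspect that correlation yourself, one way (a sketch under the usual linearized-covariance approximation; ages and heights stand in for your data columns) is to use the Jacobian that lsqcurvefit returns:
[p,resnorm,residual,~,~,~,J] = lsqcurvefit(heightModel,initialValues,ages,heights);
dof = numel(residual) - numel(p);           % degrees of freedom
covP = inv(full(J'*J))*resnorm/dof;         % approximate parameter covariance
corrP = covP./sqrt(diag(covP)*diag(covP)')  % entries near +/-1 flag correlated parameters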

Suppress kinks in a plot matlab

I have a csv file which contains data like below (the first row is a header):
Element,State,Time
Water,Solid,1
Water,Solid,2
Water,Solid,3
Water,Solid,4
Water,Solid,5
Water,Solid,2
Water,Solid,3
Water,Solid,4
Water,Solid,5
Water,Solid,6
Water,Solid,7
Water,Solid,8
Water,Solid,7
Water,Solid,6
Water,Solid,5
Water,Solid,4
Water,Solid,3
The same pattern is repeated with State "Solid" replaced by Liquid and Gas, and the Element "Water" can be replaced by some other element too.
The Time values are integers here (seconds, to simplify) but can be any real number.
Additionally, there might be comment lines starting with # in between the file.
Problem statement: I want to eliminate the first dip in the Time values and smooth it out using some quadratic, cubic, or other polynomial interpolation. [Please notice the first change from 5 -> 2 and the climb back up to 8; I want to replace these numbers with intermediate values giving a gradual/smooth increase from 5 to 8.]
And I wish this to be done for all combinations of Elements and States.
Is this possible through some sort of coding in Matlab, etc.?
Any pointers will be helpful!
Thanks in advance :)
You can use the interp1 function for 1D-interpolation. The syntax is
yi = interp1(x,y,xi,method)
where x are your original coordinates, y are your original values, xi are the coordinates at which you want the values to be interpolated at and yi are the interpolated values. method can be 'spline' (cubic spline interpolation), 'pchip' (piece-wise Hermite), 'cubic' (cubic polynomial) and others (see the documentation for details).
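For the dip in your sample, this might look like the sketch below (the dip indices are hard-coded here; in practice you would detect where the Time column first decreases):
time = [1 2 3 4 5 2 3 4 5 6 7 8 7 6 5 4 3];  % Time column for Water/Solid
idx = 1:numel(time);                          % sample index
bad = 6:9;                                    % the dip: the values 2 3 4 5 after the first 5
good = setdiff(idx,bad);
time(bad) = interp1(good,time(good),bad,'pchip'); % gradual rise from 5 towards the 6 7 8 part
time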
You have a lot of options here; it really depends on the nature of your data, but I would start off with a simple moving-average (MA) filter (which replaces each data point with the average of the neighboring data points) and see where that takes me. It's easy to implement, and fine-tuning the MA span a couple of times on some sample data is usually enough.
http://www.mathworks.se/help/curvefit/smoothing-data.html
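A minimal moving-average sketch along those lines that needs no toolbox (time here stands for the numeric Time column; span is the knob to tune):
span = 5;                            % MA window width; tune on sample data
kernel = ones(1,span)/span;
smoothed = conv(time,kernel,'same'); % each point becomes the mean of its neighborhood
% note: the first and last few points are biased toward zero with 'same';
% smooth() from the Curve Fitting Toolbox handles the edges more gracefully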
I would not try to fit a polynomial to the entire data set unless I really needed to compress it (but to do so you can use the polyfit function).