How do I smooth data in Maple?

In Maple, what is the best way to smooth a data set, similar to the one shown in the diagram, to get a regular (smooth) curve without affecting its overall statistical properties? I am using Maple 16.

I would recommend having a look at the data smoothing commands in the Statistics package. Something like an exponential smoothing model could be applied to your data in order to smooth out the trend line.
If you have a recent copy of Maple, you can experiment with this using something like the following:
with(Statistics):
Z := Sample(Normal(0, 1), 50): #Generate some data
Y := CumulativeSum(Z):
ESmodel := Constant -> ExponentialSmoothing(Y, 0.1*Constant):
The Explore command creates an interface where you can try out different values for a smoothing constant:
Explore( plots[display]( LineChart(Y, color=blue),
                         LineChart(ESmodel(Constant), thickness=3, color=red),
                         gridlines=true ),
         parameters=[Constant=1..10] );

Related

How to apply Sobel Filter in frequency domain [Matlab]

As suggested in the title, I wish to apply a Sobel filter in the frequency domain using MATLAB.
The main problem here is that the Sobel filter's matrix size is far smaller than the original image.
Is there a way I could apply the filter using the method I used below?
I'd prefer not to write more functions or lengthy code.
global curr_freequency;
global trans_freequency;
og_img=getimage(handles.org_img);
[m,n]=size(og_img);
h=size(og_img,1);
w=size(og_img,2);
[x,y]=meshgrid(-floor(w/2):floor((w-1)/2),-floor(h/2):floor((h-1)/2));
allitems=handles.SharpFilters.String;
selectedindex=handles.SharpFilters.Value;
selectedItem=allitems{selectedindex};
filt_val=str2num(handles.SharpMaskSize.String);
if (selectedItem=="Butterworth High Pass")
    order=str2num(handles.order_of_filter.String);
    filt=1./(1.+(filt_val./(x.^2+y.^2).^0.5).^(2.*order));
    final_img=curr_freequency.*filt;
    trans_freequency=final_img;
    x=fftshow(final_img);
    axes(handles.intr_freeq);
    imshow(x);
    ifftshow(ifft2(final_img),handles);
elseif selectedItem=="Sobel Filter Vertical"
    sobelvert_filt=fspecial('sobel');
    freeq_filt=fft2(sobelvert_filt);   % 3x3 transform: does not match the image size
    applied_filt=curr_freequency.*freeq_filt;
    trans_freequency=applied_filt;
    axes(handles.intr_freeq);
    imshow(fftshow(applied_filt));
    ifftshow(ifft2(applied_filt),handles);
    axes(handles.intr_img);
    imshow(new_img);   % new_img is defined in code not shown here
end
The fftshow function:
function ret_val = fftshow(f)
fl=log(1+abs(f));
fm=max(fl(:));
ret_val=im2uint8(fl/fm);
The ifftshow function:
function []=ifftshow(f,handles)
fl=abs(f);
fm=max(fl(:));
show_val=fl/fm;
axes(handles.spatial_rep);
imshow(show_val);
Just a thought which I tried implementing; maybe my approach is wrong: when I fftshift the average values to the center, what if I apply a 3x3 filter to the center values of the fftshifted matrix?
I tried doing padarray(sobelfilter,[floor((h/2)-3),floor((w/2)-3)]),
but it produces wrong results even if I change the reductions, i.e. -3 to 0, etc.
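One way to make the sizes agree, sketched below and not tested against the code above, is to let fft2 itself zero-pad the 3x3 kernel to the image size (fft2(X,m,n) pads X with trailing zeros before transforming); whether a further fftshift is needed depends on whether curr_freequency holds a centered spectrum:
sobel_filt = fspecial('sobel');
freeq_filt = fft2(sobel_filt, m, n);       % kernel zero-padded to m-by-n before the FFT
% If curr_freequency was produced with fftshift, shift the kernel spectrum too:
% freeq_filt = fftshift(freeq_filt);
applied_filt = curr_freequency .* freeq_filt;  % element-wise product now well defined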

comparing filtered data: matlab bandpass function vs filter function

I'm trying to break down how the bandpass function does its filtering, and stumbled upon this line (after the filter is created).
y = signal.internal.filteringfcns.filterData(x,opts);
x is the data and opts has the filter structure.
I've been looking around and haven't been able to find anything about the signal.internal.filteringfcns.filterData function. I compared its output with filter(opts.FilterObject,x) and they are not the same.
Next is a minimal working example (data2.txt).
load('data2.txt')
srate=64;
freqrange=[0.4 3.5];
var{1}=freqrange;
var{2}=srate;
m=numel(data2);
x=data2;
R=0.1;%10% of signal
Nr=50;
NR=min(round(m*R),Nr);%At most 50 points
x1=2*x(1)-flipud(x(2:NR+1));%maintain continuity in level and slope
x2=2*x(end)-flipud(x(end-NR:end-1));
x=[x1;x;x2];
opts=signal.internal.filteringfcns.parseAndValidateInputs(x,'bandpass',var);
opts = designFilter(opts);
xx = signal.internal.filteringfcns.filterData(x,opts);
x_fil=xx(NR+1:end-NR);
xx = filter(opts.FilterObject,x);
x_fil2=xx(NR+1:end-NR);
plot([data2 x_fil x_fil2])
legend('raw','filterData','filter')
Here is the plot:
And here are the PSD plots of both filtered signals (filterData first).
So, any help on this filterData function? Or am I doing something wrong in my analysis?
Hi again :) If you type edit signal.internal.filteringfcns.filterData, you can even look at what is inside this filterData function. You will see that this function will, depending on the options opts, either:
right zero pad the signal with N/2 zeros and call filter, or
call filtfilt with the signal.
This is also described in the docs of bandpass, so this zero padding probably explains why your output of filter(opts.FilterObject,x) is different.
You cannot find this function described in the documentation of Matlab since it is part of the internal functions of the signal processing toolbox.
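To illustrate the zero-padding branch, here is a sketch under the assumption that the designed filter is FIR with even order N (b below is a stand-in for the coefficients inside opts.FilterObject, not the actual internal code):
fs = 64;  N = 100;                  % hypothetical sample rate and filter order
b  = fir1(N, [0.4 3.5]/(fs/2));     % example linear-phase FIR bandpass design
xp = [x; zeros(N/2,1)];             % right zero pad the signal with N/2 zeros
yp = filter(b, 1, xp);              % causal filtering of the padded signal
y  = yp(N/2+1:end);                 % discard the N/2-sample group delay
The other branch simply calls filtfilt, which achieves zero-phase filtering by running the filter forwards and backwards.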

how to plot the solution to this PDE?

Maple generates a strange solution form for this PDE, and I am having a hard time plotting the solution.
The solution is in terms of an infinite series. I set the number of terms to, say, 20, and then set the time at which to plot the solution to t=2 seconds. Then I want to plot the solution for x=0..1, but the plot comes out empty.
When I sample the solution and use listplot, I get the correct solution plot.
Here is a MWE:
restart;
pde:=diff(u(x,t),t)=diff(u(x,t),x$2)+x;
bc:=u(0,t)=0,u(1,t)=0;
ic:=u(x,0)=x*(1-x);
sol:=pdsolve({pde,ic,bc},u(x,t)):
sol:=value(sol);
Now set the number of terms to 20 and set t=2
sol2:=subs(t=2,sol):
sol2:=subs(infinity=20,sol2);
The above is what I want to plot.
plot(rhs(sol2),x=0..1);
I get an empty plot.
So I had to manually sample it and use listplot:
f:=x->rhs(sol2);
data:=[seq([x,f(x)],x=0..1,.01)]:
plots:-listplot(data);
The solution looks correct when I compare it to Mathematica's result. But Mathematica's result is simpler, as it does not have those integrals in the sum.
pde=D[u[x,t],t]==D[u[x,t],{x,2}]+x;
bc={u[0,t]==0,u[1,t]==0};
ic=u[x,0]==x*(1-x);
DSolve[{pde,ic,bc},u[x,t],x,t];
%/.K[1]->n;
%/.Infinity->20;
%/.t->2;
And the plot is
The question is: how can I plot the Maple solution without manually sampling it?
Short answer seems to be that it is a regression in Maple 2017.3.
For me, your code works directly in Maple 2017.2 and Maple 2016.2 (without any unevaluated integrals). I will submit a bug report against the regression.
[edited] Let me know if any of these four ways work for your version (presumably Maple 2017.3).
restart;
pde:=diff(u(x,t),t)=diff(u(x,t),x$2)+x;
bc:=u(0,t)=0,u(1,t)=0;
ic:=u(x,0)=x*(1-x);
sol:=pdsolve({pde,ic,bc},u(x,t)):
sol:=value(sol);
# First way: convert sum to the inert Sum, substitute, then apply value.
sol5:=value(combine(subs([sum=Sum,t=2,infinity=20],sol))):
plot(rhs(sol5),x=0..1);

# Second way: as above, but keep the Sum inert and plot with software floats.
sol4:=combine(subs([sum=Sum,t=2,infinity=20],sol)):
(UseHardwareFloats,oldUHF):=false,UseHardwareFloats:
plot(rhs(sol4),x=0..1);
UseHardwareFloats:=oldUHF: # re-instate

# Third way: manipulate the expression structure directly.
sol2:=subs([sum=Sum,int=Int,t=2],sol):
# Switch integration and summation in second summand of rhs(sol).
sol3:=subsop(2=Sum(int(op([2,1,1],rhs(sol2)),op([2,2],rhs(sol2))),
             op([2,1,2],rhs(sol2))),rhs(sol2)):
# Rename dummy index and combine summations.
sol3:=Sum(subs(n1=n,op([1,1],sol3))+op([2,1],sol3),
          subs(n1=n,op([1,2],sol3))):
# Curtail to first 20 terms.
sol3:=lhs(sol2)=subs(infinity=20,simplify(sol3));
plot(rhs(sol3),x=0..1);

# Fourth way: turn the curtailed Sum into add and plot the resulting procedure.
F:=unapply(subs([Sum='add'],rhs(sol3)),x):
plot(F,0..1);
[edited] Here is yet another way, working for me in Maple 2017.3 on 64-bit Linux.
It produces the plot quickly and doesn't involve curtailing any sum at 20 terms. Note that it does not do your earlier step of sol:=value(sol); since it substitutes the active int for the inert Int before value hits any Sum. It also uses an assumption on x corresponding to the plotting range.
restart;
pde:=diff(u(x,t),t)=diff(u(x,t),x$2)+x:
bc:=u(0,t)=0,u(1,t)=0:
ic:=u(x,0)=x*(1-x):
sol:=pdsolve({pde,ic,bc},u(x,t)):
solA:=subs(sum=Sum,value(eval(eval(sol,t=2),Int=int))) assuming x>0, x<1;
plot(rhs(solA),x=0..1) assuming x>0, x<1;

function parameters in matlab wander off after curve fitting

First, a little background: I'm a psychology student, so my background in coding isn't on par with you guys :-)
My problem is as follows, and the most important observation is that curve fitting with two different programs gives completely different results for my parameters, although my graphs stay the same. The main program we have used to fit my longitudinal data is KaleidaGraph, and this should be seen as kind of the 'gold standard'; the program I'm trying to move to is MATLAB.
I was trying to be smart and wrote some code (a lot, at least for me), with the following goals:
1. Taking an individual longitudinal data file
2. Curve fitting this data to a non-parametric model using lsqcurvefit
3. Obtaining figures and the points where f' and f'' are zero
This all worked well (woohoo :-)), but when I started comparing the function parameters both programs generate, there is a huge difference. KaleidaGraph stays close to its original starting values. MATLAB wanders off and sometimes gets larger by a factor of 1000. The graphs stay more or less the same in both situations, however, and both fit the data well. Still, it would be lovely to know how to make the MATLAB curve fitting more 'conservative', keeping it located near its original starting values.
validFitPersons = true(nbValidPersons,1);
for i=1:nbValidPersons
    personalData = data{validPersons(i),3};
    personalData = personalData(personalData(:,1)>=minAge,:);
    % Fit a specific model for all valid persons
    try
        opts = optimoptions(@lsqcurvefit, 'Algorithm', 'levenberg-marquardt');
        [personalParams,personalRes,personalResidual] = lsqcurvefit(heightModel,initialValues,personalData(:,1),personalData(:,2),[],[],opts);
    catch
        x=1;
    end
Above is the part of the code I've written to fit the data files to a specific model.
Below is an example of a non-parametric model I use, with its function parameters.
elseif strcmpi(model,'jpa2')
    % y = a.*(1-1/(1+(b_1(t+e))^c_1+(b_2(t+e))^c_2+(b_3(t+e))^c_3))
    heightModel = @(params,ages) abs(params(1).*(1-1./(1+(params(2).* (ages+params(8) )).^params(5) +(params(3).* (ages+params(8) )).^params(6) +(params(4) .*(ages+params(8) )).^params(7) )));
    modelStrings = {'a','b1','b2','b3','c1','c2','c3','e'};
    % Define initial values
    if strcmpi('male',gender)
        initialValues = [176.76 0.339 0.1199 0.0764 0.42287 2.818 18.52 0.4363];
    else
        initialValues = [161.92 0.4173 0.1354 0.090 0.540 2.87 14.281 0.3701];
    end
I've tried to mimic the curve fitting process in KaleidaGraph as closely as possible. There I found they use the Levenberg-Marquardt algorithm, which I've selected. However, results still vary, and I don't have any more clues about how I can change this.
Some extra adjustments:
The idea for this code was the following:
I'm trying to compare different fitting models (they are designed for this purpose). So what I do is: I have 5 models with different parameters and different starting values (the second part of my code), and next I have the general curve fitting file. Since there are different models, it would be interesting if I could put restrictions on how far my starting values can wander off.
Anyone any idea how this could be done?
Anybody willing to help a psychology student?
Cheers
This is a common issue when dealing with non-linear models.
If I were you, I would try to check whether you can remove some parameters from the model in order to simplify it.
If you really want to keep your solution not too far from the initial point, you can use upper bounds and lower bounds for each variable:
x = lsqcurvefit(fun,x0,xdata,ydata,lb,ub)
defines a set of lower and upper bounds on the design variables in x so that the solution is always in the range lb ≤ x ≤ ub.
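For example, a minimal sketch using the variable names from the question (the factor-of-two window is arbitrary and assumes all starting values are positive):
lb = 0.5*initialValues;     % lower bounds: half the starting values
ub = 2*initialValues;       % upper bounds: twice the starting values
% Note: in many MATLAB releases lsqcurvefit rejects bounds when the
% 'levenberg-marquardt' algorithm is selected, so let it fall back to the
% default 'trust-region-reflective' algorithm here.
personalParams = lsqcurvefit(heightModel, initialValues, ...
    personalData(:,1), personalData(:,2), lb, ub);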
Cheers
You state:
I'm trying to compare different fitting models (they are designed for
this purpose). So what I do is I have 5 models with different
parameters and different starting values ( the second part of my code)
and next I have the general curve fitting file.
You will presumably compare the statistics from fits with different models, to see whether reductions in the fitting error are unlikely to be due to chance. You may want to rely on that comparison to pick the model that not only fits your data suitably but is also simplest (which is often referred to as the principle of parsimony).
The problem is really with the model you have shown resulting in correlated parameters and therefore overfitting, as mentioned by @David. Again, this should be resolved when you compare different models and find that some do just as well (statistically speaking) even though they involve fewer parameters.
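One common way to make that comparison concrete is an information criterion such as the AIC; a minimal sketch, assuming Gaussian errors and using the residual norm that lsqcurvefit returns:
% m = number of data points, k = number of fitted parameters,
% resnorm = sum of squared residuals returned by lsqcurvefit.
aic = @(resnorm, m, k) m*log(resnorm/m) + 2*k;
% Evaluate aic(...) for each of the 5 models; the lowest value marks the
% best trade-off between fit quality and number of parameters.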
edit
To drive the point home regarding the problem with the choice of model, here are (1) results of a trial fit using simulated data (2) the correlation matrix of the parameters in graphical form:
Note that absolute values of the correlation close to 1 indicate strongly correlated parameters, which is highly undesirable. Note also that the trend in the data is practically linear over a long portion of the dataset, which implies that 2 parameters might suffice over that stretch, so using 8 parameters to describe it seems like overkill.

Suppress kinks in a plot [MATLAB]

I have a CSV file which contains data like below (1st row is the header):
Element,State,Time
Water,Solid,1
Water,Solid,2
Water,Solid,3
Water,Solid,4
Water,Solid,5
Water,Solid,2
Water,Solid,3
Water,Solid,4
Water,Solid,5
Water,Solid,6
Water,Solid,7
Water,Solid,8
Water,Solid,7
Water,Solid,6
Water,Solid,5
Water,Solid,4
Water,Solid,3
A similar pattern is repeated for the other States, with "Solid" replaced by "Liquid" and "Gas".
Moreover, the Element "Water" can be replaced by some other element too.
Time values are integers here, in seconds (to simplify), but can be any real number.
Additionally, there might be some comment lines starting with # in between in the file.
Problem statement: I want to eliminate the first dip in the Time values and smooth it out using some quadratic, cubic, or other polynomial interpolation [please notice the first change from 5 -> 2 and back up to 8; I want to replace these numbers with intermediate values giving a gradual/smooth increase from 5 to 8].
And I wish this to be done for all the combinations of Elements and States.
Is this possible through some sort of coding in MATLAB, etc.?
Any Pointers will be helpful !!
Thanks in advance :)
You can use the interp1 function for 1D-interpolation. The syntax is
yi = interp1(x,y,xi,method)
where x are your original coordinates, y are your original values, xi are the coordinates at which you want the values to be interpolated at and yi are the interpolated values. method can be 'spline' (cubic spline interpolation), 'pchip' (piece-wise Hermite), 'cubic' (cubic polynomial) and others (see the documentation for details).
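Applied to the dip in the question, a minimal sketch (the dip indices are hard-coded here for illustration; in practice you would detect where the series decreases):
Time = [1 2 3 4 5 2 3 4 5 6 7 8 7 6 5 4 3]';   % the "Solid" rows from above
t    = (1:numel(Time))';                        % sample index as x-coordinate
bad  = (6:9)';                                  % samples forming the first dip
good = setdiff(t, bad);
Time(bad) = interp1(good, Time(good), bad, 'pchip');  % smooth rise from 5 towards 8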
You have a lot of options here; it really depends on the nature of your data, but I would start off with a simple moving average (MA) filter, which replaces each data point with the average of the neighboring data points, and see where that takes me (see the sketch at the end of this answer). It's easy to implement, and fine-tuning the MA span a couple of times on some sample data is usually enough.
http://www.mathworks.se/help/curvefit/smoothing-data.html
I would not try to fit a polynomial to the entire data set unless I really needed to compress it (but to do so you can use the polyfit function).
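A minimal sketch of the moving-average idea (the span of 5 is just a starting point to fine-tune):
span    = 5;                           % odd span keeps the average centered
kernel  = ones(span,1)/span;
ySmooth = conv(y, kernel, 'same');     % y is the raw Time series as a column
% 'same' implicitly zero-pads, so the few samples at each edge are biased
% towards zero; the smooth() function from the Curve Fitting Toolbox
% (linked above) handles the edges more carefully.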