I have a set of data which I have to interpolate. My first data set contains z1[:] values for every x1[:] and y1[:]. I have to interpolate my second data set x2[:], y2[:] with respect to my first set to get z2[:] values. The size of my first data set is different from that of the second data set. Is there any algorithm already written in Modelica for this functionality?
Interpolation on irregular grids was already requested in https://trac.modelica.org/Modelica/ticket/1153#comment:15 but is still not part of the Modelica Standard Library (as of the current version, 3.2.2).
I have a snapshot of a 3D field whose domain is a cube. I need to visualize the vorticity associated with this field. I am following the approach described in this video, in which the vorticity is calculated by ParaView.
I followed the procedure, but inside the Compute Derivatives / Coloring filter I cannot find the vorticity, only the components of the starting field, as you can see in the following picture:
I read that another method is to use the filter for unstructured data but I don't have such a filter.
How should I properly visualize the vorticity?
I am using ParaView 5.10.
In your screenshot, the value of the Vectors property shows a (?), meaning that it is not a valid input. You should select an existing vector array here.
I have some repeated-measures data structured similarly to the example Ovary data provided in the GAMLSS package. I would like to use the random effects syntax in my GAMLSS model as well as a smoother term for the fixed effect of time. I am having trouble extracting and plotting the fixed effect component only.
Consider the following model.
library(gamlss)
library(ggplot2)
data(Ovary)
m1 <- gamlss(follicles~pb(Time) + re(random=~1+Time|Mare), data=Ovary)
The following extracts the fitted values for the model.
Ovary$fit<-fitted(m1)
I am interested in extracting and plotting the fixed effect component; however, I am having some difficulty identifying how to do this exactly. The following calculates the partial effect for time.
pef <- getPEF(obj=m1,term="Time")
time<-seq(0,1,by=.05)
fit<-pef(time)
mydata<-data.frame(time=time,fit=fit)
And this is the plot of the raw data, fitted values, and partial effect.
ggplot(data=Ovary, aes(x=Time, y=follicles, color=Mare, group=Mare)) +
  geom_line() +
  geom_point() +
  geom_line(data=Ovary, inherit.aes=FALSE, aes(x=Time, y=fit, group=Mare)) +
  geom_point(color="black") +
  geom_line(data=mydata, inherit.aes=FALSE, aes(x=time, y=fit), color="red", size=3)
The thick red line, which represents the partial effect for time, does not seem quite right to me based on the underlying fitted values.
So my questions are:
Could someone direct me to how to calculate fitted values for the fixed effect component only in GAMLSS, if this is achievable? This would be more straightforward without the smoother for the fixed effect, but my real data have a nonlinear relation with time that isn't fit well by, for example, making the fixed component a polynomial.
I would like to do this using GAMLSS because the dependent variable I am actually using has a beta distribution (which GAMLSS implements) with a decreasing variance over time (which GAMLSS allows me to model explicitly), and for another component of this project I am using this same dataset to calculate centiles (which GAMLSS can do, sans the random effect). That said, I am open to alternative suggestions.
I have a 115×8000 data set, where 115 is the number of features. When I use the pca function of MATLAB like this
[coeff,score,latent,tsquared,explained,mu] = pca(data);
on my data, I get some values. I read on here how I can reduce my data, but one thing confuses me. The explained output shows how much a feature weighs in the calculation, but do the features get reorganized in this process, or are they in exactly the same order as I gave them to the function?
Also, I give 115 features, but explained shows only 114. Why does that happen?
The data is not "reorganized" in PCA; it is transformed to a new space. When you crop the PCA space, that is your data, but you are not going to be able to visualize/understand it there; you need to convert it back to "normal" space using the eigenvectors and such.
explained gives you 114 because you already know what the answer is with 115: 100% of the data can be explained with the whole data!
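One way to see where the 114 comes from directly in MATLAB (note that pca treats rows as observations, so a 115-by-8000 matrix is read as 115 observations of 8000 variables, and after mean-centering those observations span at most 114 directions):
X = rand(115, 8000);                 % same shape/orientation as the data in the question
[~, ~, ~, ~, explained] = pca(X);
numel(explained)                     % 114: at most n-1 components for n = 115 rows
sum(explained)                       % ~100: those 114 components already explain all the variance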
Read about it further in this answer: Significance of 99% of variance covered by the first component in PCA
PCA does not "choose" some of your features and remove the rest.
So you should not still be thinking about the original features after running PCA.
It is well-explained here on Wikipedia. You are converting your samples from the space defined by your original features to a space where features are linearly uncorrelated and called "principal components". Note: these components are no longer the original features.
An example of this in 2D could be: you have a vector z = (2, 3) defined in your Euclidean space. It needs 2 features (the x and the y). If we change the space and define it using the coordinate vectors v = (2, 3) and w, an orthogonal vector to v, then z = (1, 0), i.e. z = 1·v + 0·w, and it can now be represented with only 1 feature (the first coordinate!).
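A quick numeric check of that 2D example (w = (-3, 2) is one possible choice of a vector orthogonal to v):
v = [2; 3];
w = [-3; 2];             % orthogonal to v
z = [2; 3];
coords = [v w] \ z;      % coordinates of z in the basis {v, w}
disp(coords')            % prints 1 0, i.e. z = 1*v + 0*w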
The link that you shared explains exactly (in the selected answer) how you can go about using the outputs of the pca function to reduce your dimensionality.
(As noted by Ander, you do not care about the last components, since these are the weakest anyway and you want to drop them.)
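As a minimal sketch of that workflow (the 95% threshold and variable names are just illustrative, and rows are assumed to be observations):
X = rand(200, 10);                             % example data: 200 observations, 10 features
[coeff, score, ~, ~, explained, mu] = pca(X);

k = find(cumsum(explained) >= 95, 1);          % smallest k covering ~95% of the variance
Xreduced = score(:, 1:k);                      % the data expressed in the first k components

% Map back to the original feature space (an approximation of X):
Xapprox = score(:, 1:k) * coeff(:, 1:k)' + mu;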
Hello, I have spectral data collected over time. I want to store the outliers and their indices so that the user can see where the outliers are. I have searched for how to find outliers but can't seem to find a solution to my problem.
An outlier can be defined as a value more than 1.5 times the standard deviation from the mean, since this is what I've mostly seen.
data = rand(1024,20); % spectral data over time
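In code, that definition applied column-wise would look roughly like this (just a sketch; the implicit expansion requires R2016b or newer):
mu    = mean(data);                      % mean of each column
sigma = std(data);                       % standard deviation of each column
mask  = abs(data - mu) > 1.5 * sigma;    % true where a sample counts as an outlier
[outRow, outCol] = find(mask);           % row/column indices to show the user
outVals = data(mask);                    % the outlying values themselves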
If you can upgrade, you can check out the new isoutlier and filloutliers functions in R2017a. Searching for outliers more than 1.5× the standard deviation would correspond to using the 'mean' method for finding the outliers and setting the 'ThresholdFactor' name-value pair to 1.5. If you want a windowed approach, you can instead use the 'movmean' method and specify a window size.
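For example, something along these lines should flag the outliers and collect their indices (the window size of 5 is just an illustrative choice):
data = rand(1024, 20);                                    % the spectral data from the question
tf = isoutlier(data, 'mean', 'ThresholdFactor', 1.5);     % per-column, 1.5 standard deviations
outlierIdx  = find(tf);                                   % linear indices of the outliers
outlierVals = data(outlierIdx);                           % the outlier values themselves

% Windowed variant: moving mean over a 5-sample window.
tfWindowed = isoutlier(data, 'movmean', 5, 'ThresholdFactor', 1.5);

% Or replace the outliers by linear interpolation instead of just flagging them.
cleaned = filloutliers(data, 'linear', 'mean', 'ThresholdFactor', 1.5);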
I have a set of reference data points to which I want to fit a sigmoidal curve. I can use the curve fitting tool of MATLAB to do this but I have a custom equation to fit to the data. The equation has 4-5 variables which I want to vary and then test for the goodness of fit.
I tried using the goodnessOfFit function for this, but it requires the test data and reference data matrices to be of the same size. The number of reference data points that I have is small (15-20), while the number of test points generated by the custom equation is large.
Is there any other way by which I can check the goodness of fit of the curve? Or do I have to find the test data points corresponding to the points in the reference data and then use the goodnessOfFit function? (One problem with this approach is that I don't have the same resolution for the x axis in the test and reference data, e.g. for an x-point of 1.2368 in the reference data I have either 1.23 or 1.24 in my test data. I would have to round off the data and then calculate the fit.)
do I have to find the test data points corresponding to the points in the reference data and then use the goodnessOfFit function ... I will have to round off the data and then calculate the fit
Yes, buddy..! Seems like you have to do it the hard way! :/
But instead of simply rounding off, you can find the two points in the test data just before and after the corresponding reference sample point. Then use linear interpolation to guess the value corresponding to the reference point.
Or, easier, there is a resample function in MATLAB which would resample your test data to match your reference data. This would work if the reference data have a constant sample interval.
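A minimal sketch of the interpolation route from above (the sigmoids and grids are made-up example data; goodnessOfFit comes from the System Identification Toolbox):
xRef  = linspace(0, 10, 18)';                  % sparse reference x-points (15-20 of them)
yRef  = 1 ./ (1 + exp(-(xRef - 5)));           % reference values (example sigmoid)
xTest = linspace(0, 10, 1000)';                % dense x-grid of the custom equation
yTest = 1 ./ (1 + exp(-1.1*(xTest - 4.9)));    % candidate curve to be tested

% Evaluate the candidate curve at the reference x-locations:
yTestAtRef = interp1(xTest, yTest, xRef, 'linear');

% Now goodnessOfFit gets equal-sized inputs:
fitNRMSE = goodnessOfFit(yTestAtRef, yRef, 'NRMSE');

% Without that toolbox, a plain sum of squared errors works as well:
sse = sum((yTestAtRef - yRef).^2);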
All the best!