Dyadic daily diary analysis - multi-level

I have this daily diary dataset measured among couples. So it has two levels: individual-level and couple-level.
X was measured in the evening from Monday to Friday.
M was measured in the morning from Tuesday to Saturday.
Y was measured in the evening from Monday to Friday.
Therefore, all variables were assessed 5 times, but X and Y were assessed at the same time points (the same evenings).
My bosses wanted me to build this mediation model: X (the previous evening's assessment) --> M (the next morning's assessment) --> Y (the next evening's assessment).
What I came up with is a cross-lagged panel mediation model (see figure), which means (if my understanding is right) that for X we can only use the first four days' data, and for M and Y we can only use the last four days' data, because X and Y were assessed at the same time. My boss felt there should be a way to use all five days of data for each variable.
Are there other ways of analyzing this that use all five days of data for each variable? Or what are the proper ways of testing this mediation model? Thanks so much!
(The Figure was copied and pasted from Selig & Preacher, 2009)


Matlab average number of customers during a single day

I'm having problems creating a graph of the average number of people inside a 24h shopping complex. I have two columns of data on a spreadsheet of the times a customer comes in (intime) and when he leaves (outtime). The data spans a couple of years and is in datetime format (dd-mm-yyyy hh:mm:ss).
I want to make a graph of the data with time of day as x-axis, and average number of people as y-axis. So the graph would display the average number of people inside during the day.
Problems arise because the place is open 24h and the timespan of data is years. Also customer intime & outtime might be on different days.
Example:
intime 2.1.2017 21:50
outtime 3.1.2017 8:31
Any idea how to display the data easily using Matlab?
Been on this for multiple hours without any progress...
It seems like you need to decide what defines a customer being in the shop during a day: is 1 minute enough? Is there a minimum length of stay below which you don't want to count it as a visit?
In the former case (any presence counts), you shouldn't be concerned with the hours at all: just count it as one entry if the entry and exit fall on the same day, or as two separate entries if not.
It's been a couple of years since I coded actively in MATLAB and I don't have an IDE handy, but if you add the code you have so far, I can fix it for you.
I think you need to start by just plotting the raw count of people in the complex at the given times. Once that is visualized, it may help you determine how you want to define "average people per day" and how to go about calculating it. Does that mean the average at a given time, or the total "ins" per day? For example, 100 people enter the complex in a day, but on average there are only 5 in the complex at a given time. Which stat is more important? Maybe you want both.
Here is an example of how to get the raw plot of the number of people at any given time. I simulated your in & out times with random numbers.
inTime = cumsum(rand(100,1));            % They show up randomly
outTime = inTime + rand(100,1) + 0.25;   % Stay for 0.25 to 1.25 hrs
inCount = ones(size(inTime));            % Add one for each entry
outCount = -ones(size(outTime));         % Subtract one for each exit
allTime = [inTime; outTime];             % Stick them together
allCount = [inCount; outCount];
[allTime, idx] = sort(allTime);          % Sort the timestamps
allCount = allCount(idx);                % Sort counts by the timestamps
allCount = cumsum(allCount);             % Running total at any given time
plot(allTime, allCount);                 % Plot the occupancy over time
Note that the x-values are not uniformly spaced.
If you decide you are more interested in total customers per day, then you could just find the inTimes within a given time range (each day) and probably ignore the outTimes altogether.
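If instead you want the original goal (average number of people inside versus time of day, averaged over all days), here is a minimal, unoptimized sketch assuming inTime and outTime are datetime vectors read from your spreadsheet. It counts, for each minute of the day, how many visits overlap that minute (overnight stays are handled naturally because the loop walks across midnight), then divides by the number of days covered:
minutesPerDay = 24*60;
occupancy = zeros(minutesPerDay,1);            % total person-minutes observed per minute-of-day
for k = 1:numel(inTime)
    t = dateshift(inTime(k),'start','minute');
    while t < outTime(k)
        m = minute(t) + 60*hour(t) + 1;        % minute-of-day index (1..1440)
        occupancy(m) = occupancy(m) + 1;
        t = t + minutes(1);
    end
end
nDays = days(dateshift(max(outTime),'start','day') - dateshift(min(inTime),'start','day')) + 1;
avgOccupancy = occupancy / nDays;              % average people present at each minute of the day
plot((0:minutesPerDay-1)/60, avgOccupancy);
xlabel('Hour of day'); ylabel('Average number of people inside');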

Dymola / Modelica - District heating

I am trying to validate a district heating model I built using Dymola.
In this case, I am trying to find the mass flow over a one-year period. I have two models running, both with the same loads and with pipes having the same characteristics as in this picture:
pipes
Both models are as follows:
models
My results make sense at least regarding the time of year when my flow should be higher: I am getting very high values during January, February and March, and then again toward the end of the year.
However, those peaks are very different: the first model in the picture gives peaks of almost 400 kg/s, whereas the second one reaches only about 70 kg/s.
Can anyone suggest a way to validate the model? I have the heat loads for the year, hour by hour (this is the input I give to Dymola), and I know that the minimum water temperature is 70 °C and the maximum is 85 °C.
But I am really struggling to validate my model. Any suggestions?
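One simple hand check, independent of Dymola (a sketch under my own assumptions, with hourlyLoadW a placeholder name for your hourly load vector in watts): for water, delivering a heat load Q with a supply/return temperature difference dT requires a mass flow m_dot = Q / (cp * dT). With dT at most 85 - 70 = 15 K, the hourly load series gives a lower bound on the flow, which you can compare against the peaks from both models:
cp = 4186;                 % J/(kg*K), specific heat capacity of water
dT = 15;                   % K, maximum supply-return temperature difference (85 - 70 °C)
Q  = hourlyLoadW;          % assumed variable: your hourly heat loads in W
mDotMin = Q ./ (cp * dT);  % kg/s, minimum flow needed to deliver each hourly load
fprintf('Peak implied mass flow: %.1f kg/s\n', max(mDotMin));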

Find and Rank Time Series MATLAB

I know there must be a simple way to do this, but I cannot imagine how to start. I am tasked with finding the top 10 daily wind power time series that best match a single daily wind power series, searching within a plus/minus 30-day window around the first day of the series (Jan 1st), and it is beyond my level of experience in MATLAB. I have successfully matched a single time series from the current year against the exact same calendar days in previous years, but I need a more robust search that finds the best-correlated series within a +/- window of time. For example, I'm comparing a 120-day time series (without leap years) against the same 120-day period (Jan-Apr) in 25 previous years. The end result should show me the top 10 time series, listing the year and the Julian (or cumulative) day, with a correlation or RMSE value associated with each. My data are arranged in a 365 (days) x 25 (years) array and look like this. Thank you very much for your help!
1182573 470528 1638232 2105034 1070466 478257 1096999
879997 715531 1111498 1004556 1894202 1372178 1707984
636173 937769 2119436 742710 1625931 1275567 1228515
967360 1103082 2218855 1643898 1822868 554769 1325642
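A minimal sketch of the +/- 30-day search (the variable names are mine: data is your 365-by-25 array of daily values, and target is the 120-day series you want to match). It slides the start day of each candidate window, computes a Pearson correlation against the target, and ranks the results:
winLen   = 120;
maxShift = 30;
results  = [];                                % rows: [year, startDay, correlation]
for yr = 1:size(data,2)
    for startDay = 1:(1+maxShift)             % days 1..31; earlier starts would need the prior December
        candidate = data(startDay:startDay+winLen-1, yr);
        c = corrcoef(target(:), candidate);
        results = [results; yr, startDay, c(1,2)]; %#ok<AGROW>
    end
end
results = sortrows(results, -3);              % best correlation first
top10 = results(1:min(10,size(results,1)), :) % year, start day, correlation
You could swap the correlation for an RMSE, sqrt(mean((target(:)-candidate).^2)), and sort ascending instead.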

Tableau: Four week moving average, first four weeks

Setting
As I'm sure many of you do in your vizzes, I use date parameters for my data. This is great for creating trend analyses and all types of time series representations. Currently I'm using a line graph to show our sales hit rate history.
Picture
Question
The problem I'm running into is in creating a four-week moving average. As you can see, the moving average doesn't become a true four-week average until four weeks in! This creates quite a problem for me. What methods will enable the average at t=0 to show the average of the preceding four weeks?
Formula Used
This is my formula for creating the four week moving average:
WINDOW_AVG([Hit Ratio],-27,0)
Remove your date filter and instead null out the marks before your start date, so that the preceding four weeks still feed the window average:
IIF(ATTR([DATE_FIELD]) < <your t=0 date>, NULL, WINDOW_AVG([Hit Ratio],-27,0))

significant differences between means

Considering the picture below,
each value X can be identified by the indices X_g_s_d_h:
g = group, g = 1:5
s = subject number (varies for each g)
d = day number (varies for each s)
h = hour, h = 1:24
So X_1_3_4_12 means the value X referring to the
12th hour
of the 4th day
of the 3rd subject
of group 1.
First I calculate the mean (hour by hour) over all the days for each subject. Doing that, the index d disappears and each subject is represented by a vector of 24 values.
X_g_s_h will be the mean over the days for a subject.
Then I calculate the mean (subject by subject) over all the subjects belonging to the same group, resulting in X_g_h. Each group is represented by one vector of 24 values.
Then I calculate the mean over the hours for each group, resulting in X_g. Each group is now represented by a single value.
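For concreteness, a sketch of this averaging in MATLAB (the nested cell array layout and variable names are my own, since the number of subjects and days varies):
nGroups   = numel(X);                      % X{g}{s} is an (nDays x 24) matrix for subject s of group g
groupHour = zeros(nGroups, 24);            % X_g_h: per-group hourly profile
groupMean = zeros(nGroups, 1);             % X_g:   one value per group
for g = 1:nGroups
    subjHour = zeros(numel(X{g}), 24);     % X_g_s_h: per-subject hourly profiles
    for s = 1:numel(X{g})
        subjHour(s,:) = mean(X{g}{s}, 1);  % average over days
    end
    groupHour(g,:) = mean(subjHour, 1);    % average over subjects
    groupMean(g)   = mean(groupHour(g,:)); % average over hours
end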
I would like to see if the means X_g are significantly different between the groups.
Can you tell me what the proper way is?
ps
The number of subjects per group is different, and the number of days also differs for each subject. I have more than 2 groups.
Thanks
Ok so I am posting an answer to summarize some of the problems you may have.
Same subjects in both groups
Not averaging:
1 - First, if we assume that you have only one measure that is repeated every hour for a certain number of days, and that it is independent of which day and hour you pick, then you can reshape your matrix into one column per subject, per group, and perform a paired (repeated measures) t-test.
2 - If you cannot assume that your measure is independent of the hour, but it is independent of the day (let's say the concentration of a drug after administration that completely vanishes before the next day's measurement), then you can do a paired t-test for each hour (N hours), giving a total of N tests.
3 - If you cannot assume that your measure is independent of the day, but it is independent of the hour (let's say a measure of the menstrual cycle, which we will assume is stable within each day but varies between days), then you can do a paired t-test for each day (M days), giving a total of M tests.
4 - If you cannot assume that your measure is independent of the day or the hour, then you can do a paired t-test for each day and hour, giving a total of N x M tests.
Averaging:
In the cases where you cannot assume independence, you can average over the dependent dimensions, thereby removing that variance but also lowering your statistical power and limiting interpretation.
In case 2, you can average over the hours to get a mean concentration and perform a single paired t-test. Here you lose the information about how the measure changed from hour 1 to N, and just test whether the mean concentration over the tested hours differs between groups.
In case 3, you can average over both hour and day and test whether, for example, the mean estrogen is higher in one group than in another, again with only 1 test. You lose the information about how it changed between the different days.
In case 4, you can average over both hour and day, again with only 1 test. You lose the information about how it changed between the different hours and days.
NOT same subjects in both groups
Paired tests are not possible. Follow the same logic but perform unpaired tests.
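As an illustration of case 2 above (same subjects, one paired test per hour), here is a minimal MATLAB sketch; A and B are placeholder names for the two conditions' subject-by-hour matrices after averaging over days:
nHours = 24;
p = zeros(1, nHours);
for hr = 1:nHours
    [~, p(hr)] = ttest(A(:,hr), B(:,hr));  % paired t-test at hour hr
end
% With 24 tests, consider correcting for multiple comparisons,
% e.g. Bonferroni: significant = p < 0.05/nHours;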
You need to perform a statistical test of the null hypothesis H0 that the data in the different groups come from independent random samples drawn from distributions with equal means. It's better to avoid the sequential 'mean' operations and just regroup the data by g. If you assume normality and independence of the observations (as pointed out by @ASantosRibeiro below), then you can perform a t-test (http://www.mathworks.nl/help/stats/ttest2.html):
clear all;
X = randn(6,5,4,3);                          % dummy data in g_s_d_h format
Y = reshape(permute(X,[2 3 4 1]), 5*4*3, 6); % one column per group (put g last before reshaping)
h = zeros(6,6);
for i = 1:6
    for j = 1:6
        h(i,j) = ttest2(Y(:,i), Y(:,j));
    end
end
If you want to take into account the different weights of the observations, you need to calculate the t-value yourself (e.g., see http://support.sas.com/documentation/cdl/en/statug/63033/HTML/default/viewer.htm#statug_ttest_a0000000126.htm).
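For reference, a sketch of the unweighted two-sample (Welch) t statistic computed by hand, which you could then adapt to incorporate your weights (the weighted formulas are in the SAS documentation linked above); x and y here reuse two of the group columns from the example:
x  = Y(:,1);  y = Y(:,2);
n1 = numel(x);  n2 = numel(y);
se2 = var(x)/n1 + var(y)/n2;                                  % squared standard error of the mean difference
t   = (mean(x) - mean(y)) / sqrt(se2);                        % Welch t statistic
df  = se2^2 / ((var(x)/n1)^2/(n1-1) + (var(y)/n2)^2/(n2-1));  % Welch-Satterthwaite degrees of freedom
p   = 2 * tcdf(-abs(t), df);                                  % two-sided p-value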