Rolling weighted average that resets for each new value in another column - group-by

I want to calculate a rolling weighted average that resets for each value in another column. For instance, if I have two teams and I want to calculate a rolling weighted average for points per game (PPG), with more weight on recent games, how would I code this?
For instance, if I have the data frame:
TEAM  PPG
CHI   90
CHI   100
CHI   110
CHI   120
DET   100
DET   110
DET   120
DET   130
I would want to make a new column (Weighted average, W_AV) with a rolling, weighted average (in this case, the past 3 games) for PPG that has more weight on recent games and resets when the 'TEAM' column changes. It should look something like this:
TEAM  PPG  W_AV
CHI   90   NaN
CHI   100  NaN
CHI   110  102
CHI   120  112
DET   100  NaN
DET   110  NaN
DET   120  112
DET   130  122
I tried this:
df['W_AV'] = df.groupby(['TEAM']).PPG.rolling(3).mean().values
but this gives me a non-weighted average for each team.
I also tried this:
df["W_AV"] = df["PPG"].ewm(span=2).mean()
but this doesn't reset for each team although the average is weighted.
If anyone can help with this I'd really appreciate it!
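One possible approach (a sketch only, with illustrative weights that will not reproduce the exact W_AV numbers above) is to keep the groupby/rolling pattern from the first attempt but replace mean() with apply() and an explicit weight vector:

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "TEAM": ["CHI"] * 4 + ["DET"] * 4,
    "PPG": [90, 100, 110, 120, 100, 110, 120, 130],
})

# Illustrative weights over the 3-game window; the most recent game counts most
weights = np.array([1, 2, 3])

def weighted_mean(window):
    # window holds the last three PPG values for one team, oldest first
    return np.dot(window, weights) / weights.sum()

# rolling(3) within each team yields NaN for a team's first two games,
# which gives the "reset" behaviour when TEAM changes
df["W_AV"] = (
    df.groupby("TEAM")["PPG"]
      .rolling(3)
      .apply(weighted_mean, raw=True)
      .values
)
print(df)

Something like df.groupby("TEAM")["PPG"].transform(lambda s: s.ewm(span=2).mean()) should also combine the two attempts (an exponentially weighted mean that restarts per team), although ewm does not produce the leading NaN values shown above.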

Are there any instances where a negative time type could give unexpected results?

Are there any instances where a negative time type could give unexpected results when used for specific purposes? For example, when time deltas are calculated between negative time values and non-negative time values, there do not appear to be any issues.
time          val
00:00:31.384  -0.3170017
00:06:00.139   0.9033492
00:07:01.099  -0.7661049
Then, for the purpose of a window join later over a 10-min window
win:00:10:00;
winForJoin: (neg win;00:00:00) +\:(exec time from data);
first[winForJoin] gives -00:09:28.616 -00:03:59.861 -00:02:58.901
winForJoin[1]-winForJoin[0] gives 10 minutes as expected
If I understand correctly, you're asking how would a window join behave if the opening interval was a negative time? (due to the interval subtraction taking the values into negative territory, relative to 00:00).
The simple answer is that it won't behave any differently than if the times were numbers, but in practice you may see results you don't expect depending on how your table is set up and what you're trying to achieve.
Taking the example in the official wiki as a starting point: https://code.kx.com/q/ref/wj/
q)t:([]sym:3#`ibm;time:10:01:01 10:01:04 10:01:08;price:100 101 105)
q)a:101 103 103 104 104 107 108 107 108
q)b:98 99 102 103 103 104 106 106 107
q)q:([]sym:`ibm; time:10:01:01+til 9; ask:a; bid:b)
q)f:`sym`time
q)w:-2 1+\:t.time
/add volume too so it's easier to follow:
q)s:908 360 522 257 858 585 90 683 90;
q)update size:s from `q
/add an alternative range which has negative starting time
q)w2:(-11:00;1)+\:t.time
The window join takes all rows in q whose times are between the pairs of time ranges:
q)q[`time]within/:flip w
110000000b
011110000b
000001111b
Under the covers it's asking: are these positive numbers (the quote times) in between those two positive numbers (the window range)? There's no reason it can't also ask: are these positive numbers in between this negative number and this positive number?
q)q[`time]within/:flip w2
110000000b
111110000b
111111111b
You'll notice that all of them are greater than the negative time - meaning that it will include all rows from the beginning of the q table, up until the end time of that pair. This can be considered expected behaviour - if your start time is negative you must mean "from the beginning of time" - aka, all rows from the beginning of the table.
Comparing sum of size shows how the results differ:
q)wj[w;f;t;(q;(sum;`size))]
sym time price size
-----------------------
ibm 10:01:01 100 1268
ibm 10:01:04 101 1997
ibm 10:01:08 105 1448
q)wj[w2;f;t;(q;(sum;`size))]
sym time price size
-----------------------
ibm 10:01:01 100 1268
ibm 10:01:04 101 2905
ibm 10:01:08 105 4353
Finally, here is where it might get complicated: it depends on what "negative" time means in your table. If you're at 00:00 (midnight) and you subtract 10 minutes, are you trying to access data from 23:50 the day before? Or does 00:00 represent the starting time (row zero) of your table? If you're trying to access 23:50 from the day before, then you will have problems, because 23:50 is NOT between your negative start time and your positive end time, e.g.:
q)23:50 within(-00:58:59;10:01:02)
0b
Again, this all depends on how your data looks and what you're trying to do.

Mixed-effects linear regression model using multiple independent measurements

I am trying to implement a linear mixed-effects (LME) regression model for an X-ray imaging quality metric, CNR (contrast-to-noise ratio), which I measured for various tube potentials (kV) and filtration materials (Filter). CNR was measured for 3 consecutive slices, so I also have a standard deviation of the CNR from these independent measurements. I am wondering how I can incorporate these multiple independent measurements into my analysis. A representation of the data for a single measurement and my first attempt using fitlme are shown below. I tried looking at online resources but could not find an answer to my specific questions.
kV=[80 90 100 80 90 100 80 90 100]';
Filter={'Al','Al','Al','Cu','Cu','Cu','Ti','Ti','Ti'}';
CNR=[10 9 8 10.1 8.9 7.9 7 6 5]';
T=table(kV,Filter,CNR);
kV Filter CNR
___ ______ ___
80 'Al' 10
90 'Al' 9
100 'Al' 8
80 'Cu' 10.1
90 'Cu' 8.9
100 'Cu' 7.9
80 'Ti' 7
90 'Ti' 6
100 'Ti' 5
OUTPUT
Linear mixed-effects model fit by ML
Model information:
Number of observations 9
Fixed effects coefficients 4
Random effects coefficients 0
Covariance parameters 1
Formula:
CNR ~ 1 + kV + Filter
Model fit statistics:
AIC BIC LogLikelihood Deviance
-19.442 -18.456 14.721 -29.442
Fixed effects coefficients (95% CIs):
Name Estimate SE pValue
'(Intercept)' 18.3 0.17533 1.5308e-09
'kV' -0.10333 0.0019245 4.2372e-08
'Filter_Cu' -0.033333 0.03849 -0.86603
'Filter_Ti' -3 0.03849 -77.942
Random effects covariance parameters (95% CIs):
Group: Error
Name Estimate Lower Upper
'Res Std' 0.04714 0.0297 0.074821
Questions/Issues with current implementation:
How is the fixed-effects coefficient for '(Intercept)' with P = 1.53E-9 interpreted?
I only included fixed effects. Should the standard deviation of the ROI measurements somehow be incorporated into the random effects as well?
How do I incorporate the three independent measurements of CNR for three consecutive slices for a given kV/filter combination? Should I just add more rows to the table "T"? This would result in a total of 27 observations.
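For the last question, here is a rough sketch of the "add more rows" (long-format) idea, written in Python/statsmodels rather than MATLAB's fitlme purely for illustration. The per-slice CNR values are simulated around the posted means, and using Slice as the random-effect grouping variable is just one possible modelling choice, not a recommendation from this thread:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
kV = [80, 90, 100] * 3
filters = ["Al"] * 3 + ["Cu"] * 3 + ["Ti"] * 3
cnr_mean = [10, 9, 8, 10.1, 8.9, 7.9, 7, 6, 5]

# One row per slice measurement: 9 kV/Filter combinations x 3 slices = 27 rows
rows = []
for k, f, c in zip(kV, filters, cnr_mean):
    for s in (1, 2, 3):
        rows.append({"kV": k, "Filter": f, "Slice": s,
                     "CNR": c + 0.05 * s + rng.normal(scale=0.05)})  # simulated values
df = pd.DataFrame(rows)

# Fixed effects for kV and Filter, random intercept for the slice
model = smf.mixedlm("CNR ~ kV + C(Filter)", df, groups=df["Slice"])
print(model.fit().summary())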

Clustering matrix distance between 3 time series

I have a question about the application of clustering techniques, more concretely K-means.
I have a data frame with 3 sensors (A,B,C):
time      A   B   C
8:00:00   6   10  11
8:30:00   11  17  20
9:00:00   22  22  15
9:30:00   20  22  21
10:00:00  17  26  26
10:30:00  16  45  29
11:00:00  19  43  22
11:30:00  20  32  22
...       ... ... ...
And I want to group sensors that have the same behavior.
My question is: looking at the data frame above, should I calculate the correlation between the columns and then apply the Euclidean distance to that correlation matrix, obtaining a 3 x 3 matrix of distance values?
Or should I transpose my data frame and then compute the dist() matrix with the Euclidean metric only, which would also give me a 3 x 3 matrix of distance values?
You have just three sensors. That means you'll need three values: d(A,B), d(B,C) and d(A,C). Any "clustering" here does not seem to make sense to me, and certainly not k-means. K-means is for points (!) in R^d for small d.
Choose any form of time series similarity that you like. It could be simply correlation, but also DTW and the like.
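As a tiny illustration of that last suggestion (not part of either answer), a correlation-based dissimilarity between the three sensors can be computed directly from the columns shown in the question:

import pandas as pd

df = pd.DataFrame({
    "A": [6, 11, 22, 20, 17, 16, 19, 20],
    "B": [10, 17, 22, 22, 26, 45, 43, 32],
    "C": [11, 20, 15, 21, 26, 29, 22, 22],
})

# 1 - correlation turns similarity into dissimilarity: 0 means identical behaviour
print(1 - df.corr())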
Q1: No. Why: The correlation is not needed here.
Q2: No. Why: I'd calculate the distances differently.
For the first row, R's built-in dist() function (which uses Euclidean distance by default)
dist(c(6, 10, 11))
gives you the pairwise distances between the values:
  1 2
2 4
3 5 1
Items 2 and 3 are closest to each other. That's simple.
But there is no single way to calculate the distance between a point and a group of points. For that you need a linkage function (min/max/average/...).
What I would do using R's built-in kmeans() function (a rough sketch of these steps follows below):
Ignore the time column
(assuming there are no NA values in any of the A, B, C columns)
Scale the data if necessary (here they all seem to have the same order of magnitude)
Perform a k-means analysis on the A, B, C columns with k = 1...n and evaluate the results
Perform a final k-means with your suitable choice of k
Get the cluster assignments for each row
Put them in a new column to the right of C
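A rough sketch of those steps, written in Python/scikit-learn instead of R's kmeans() purely for illustration (k = 3 and the column handling are assumptions, not part of the original answer):

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({
    "time": ["8:00:00", "8:30:00", "9:00:00", "9:30:00",
             "10:00:00", "10:30:00", "11:00:00", "11:30:00"],
    "A": [6, 11, 22, 20, 17, 16, 19, 20],
    "B": [10, 17, 22, 22, 26, 45, 43, 32],
    "C": [11, 20, 15, 21, 26, 29, 22, 22],
})

X = df[["A", "B", "C"]]                       # ignore the time column
X_scaled = StandardScaler().fit_transform(X)  # optional scaling step
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)
df["cluster"] = km.labels_                    # cluster assignment per row
print(df)

Note that this clusters the rows (time points), as the steps describe; grouping the sensors themselves is the different problem discussed in the first answer.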

Writing (and using) principal component analysis in matlab

I (hope to) obtain a matrix with data on different characteristics of rat calls (in the ultrasound range). Variables include starting frequency, ending frequency, duration, etc. The observations will be all the rat calls in my audio recording.
I want to use PCA to analyse my data, hopefully decorrelating it and dropping any principal components that are not important to the structure of these calls and how they work, allowing me to group the calls.
My problem is that while I have a basic understanding of how PCA works, I don't have an understanding of the finer points including how to implement this in Matlab.
I know I should standardise my data. All the methods I have seen involve mean-adjusting by subtracting the mean. However, some also divide by the standard deviation, or divide the transpose of the mean-adjusted data by the square root of N-1 (N being the number of observations).
I know that with the standardised data you can then find the covariance matrix and extract the eigenvalues and eigenvectors, for example using eig(cov(...)); some others use svd(...) instead. I still don't understand what SVD is or why it is important.
I know there are different ways to implement PCA, but I don't like how I get different results for all of them.
There is even a pca(...) command as well.
While reconstructing the data, some people multiply the mean-adjusted data by the principal component data, and others do the same but with the transpose of the principal component data.
I just want to be able to analyse my data by plotting graphs of the principal components, and of the data with the most insignificant principal components removed. I want to know the variances along these eigenvectors and how much of the total variance of the data they represent. I want to be able to fully exploit all the information PCA can give me.
Can anyone help?
=========================================================
This code seems to work based on pg 20 of http://people.maths.ox.ac.uk/richardsonm/SignalProcPCA.pdf
X = [105 103 103 66; 245 227 242 267; 685 803 750 586;...
147 160 122 93; 193 235 184 209; 156 175 147 139;...
720 874 566 1033; 253 265 171 143; 488 570 418 355;...
198 203 220 187; 360 365 337 334; 1102 1137 957 674;...
1472 1582 1462 1494; 57 73 53 47; 1374 1256 1572 1506;...
375 475 458 135; 54 64 62 41];
[M,N] = size(X);            % M variables (rows), N observations (columns)
mn = mean(X,2);             % mean of each variable
data = X - repmat(mn,1,N);  % mean-adjusted data
Y = data' / sqrt(N-1);      % scaled so that Y'*Y is the covariance matrix of the variables
[~,S,PC] = svd(Y);          % columns of PC are the principal components
S = diag(S);                % singular values
V = S .* S;                 % variances along each principal component
signals = PC' * data;       % data projected onto the principal components
%plotting single PC1 on its own
figure;
plot(signals(1,:),zeros(1,size(signals,2)),'b.','markersize',15)
xlabel('PC1')
title('plotting single PC1 on its own')
%plotting PC1 against PC2
figure;
plot(signals(1,:),signals(2,:),'b.','markersize',15)
xlabel('PC1'),ylabel('PC2')
title('plotting PC1 against PC2')
figure;
plot(PC(:,1),PC(:,2),'m.','markersize',15)
xlabel('effect(PC1)'),ylabel('effect(PC2)')
But where is the standard deviation? How is the result different from
B=zscore(X);
[PC, D] = eig(cov(B));
D = diag(D);
cumsum(flipud(D)) / sum(D)
PC*B %(note how this says PC whereas above it says PC')
If the principal components are represented as columns, then I can remove the most insignificant eigenvectors by finding the smallest eigenvalue and setting its corresponding eigenvector column to a column of zeros.
How can either of the methods above be applied by using the pca(...) command and achieve THE SAME result? Can anyone explain this to me (and ideally show me how all of these can achieve the same results)?
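To see why the eig(cov(...)) and svd(...) routes agree, here is a small numerical check, written in NumPy rather than MATLAB purely as an illustration (columns as variables, mirroring the zscore/eig version above); MATLAB's pca(...), which centres by default and returns components as columns, should recover the same components up to sign:

import numpy as np
from numpy.linalg import eigh, svd

rng = np.random.default_rng(0)
X = rng.normal(size=(17, 4))       # 17 observations (rows), 4 variables (columns)

Xc = X - X.mean(axis=0)            # mean-adjust each variable

# Route 1: eigen-decomposition of the covariance matrix
cov = Xc.T @ Xc / (X.shape[0] - 1)
eigvals, eigvecs = eigh(cov)                       # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1] # largest variance first

# Route 2: SVD of the centred data
U, S, Vt = svd(Xc, full_matrices=False)
svd_vals = S**2 / (X.shape[0] - 1)                 # same variances as the eigenvalues

print(np.allclose(eigvals, svd_vals))              # True
print(np.allclose(np.abs(eigvecs), np.abs(Vt.T)))  # True (components match up to sign)

The sign ambiguity is also why different implementations can appear to give "different results": each principal component is only defined up to an overall sign (and an ordering).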

How to compute & plot Equal Error Rate (EER) from FAR/FRR values using matlab

I have the following values for FAR/FRR. I want to compute the EER and then plot it in MATLAB.
FAR FRR
19.64 20
21.29 18.61
24.92 17.08
19.14 20.28
17.99 21.39
16.83 23.47
15.35 26.39
13.20 29.17
7.92 42.92
3.96 60.56
1.82 84.31
1.65 98.33
26.07 16.39
29.04 13.13
34.49 9.31
40.76 6.81
50.33 5.42
66.83 1.67
82.51 0.28
Is there any MATLAB function available to do this? Can somebody explain this to me? Thanks.
Let me try to answer your question.
1) For your data, the EER can be the mean/max/min of [19.64, 20].
1.1) The idea of EER is to measure one system's performance against another (the lower the better) by finding the point where the False Alarm Rate (FAR) and the False Reject Rate (FRR, or miss rate) are equal, or, if they are never exactly equal, where their distance is smallest.
Referring to your data, [19.64, 20] gives the minimum distance, so it can be used as the EER. You can take the mean/max/min of these two values; however, since the EER is meant for comparing systems, make sure the other systems use the same method (mean/max/min) to pick their EER value.
The difference between mean/max/min can be ignored if there is a large amount of data. In some speaker verification tasks there will be around 100k data samples.
2) To understand EER, it is best to compute it yourself. Here is how.
Two things you need to know:
A) The system score for each test case (trial)
B) The true/false label for each trial
Once you have A and B, create [trial, score, true/false] triples and sort them by score. Then loop through the scores, e.g. from min to max; at each step, treat that score as the threshold and compute the FAR and FRR. After the loop, find the threshold where the FAR and FRR are (approximately) equal.
For the code you can refer to my pyeer.py, in the function processDataTable2:
https://github.com/StevenLOL/Research_speech_speaker_verification_nist_sre2010/blob/master/SRE2010/sid/pyeer.py
This function is written for the NIST SRE 2010 evaluation.
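As a minimal sketch of that threshold sweep (this is not the pyeer.py implementation; the scores and labels below are made up for illustration, and a higher score is assumed to mean "accept"):

import numpy as np

scores = np.array([0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6])  # system score per trial
labels = np.array([0,   0,   1,    1,   1,   0,   1,   1])    # 1 = target (true) trial

def eer(scores, labels):
    far, frr = [], []
    for t in np.sort(scores):                        # each score acts as a threshold
        accepted = scores >= t
        far.append(np.mean(accepted[labels == 0]))   # non-targets accepted
        frr.append(np.mean(~accepted[labels == 1]))  # targets rejected
    far, frr = np.array(far), np.array(frr)
    idx = np.argmin(np.abs(far - frr))               # point where FAR and FRR are closest
    return (far[idx] + frr[idx]) / 2                 # mean of the two, as in point 1)

print(eer(scores, labels))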
4) There are other measures similar to EER, such as minDCF, which only plays with the weights of FAR and FRR. You can refer to the "Performance Measure" section of http://www.nist.gov/itl/iad/mig/sre10results.cfm
5) You can also refer to this package https://sites.google.com/site/bosaristoolkit/ and to DETware_v2.1.tar.gz at http://www.itl.nist.gov/iad/mig/tools/ for computing and plotting the EER in MATLAB.
Plotting in DETware_v2.1:
Pmiss=1:50;Pfa=50:-1:1;
Plot_DET(Pmiss/100.0,Pfa/100.0,'r')
FAR(t) and FRR(t) are parameterized by the threshold, t. They are cumulative distributions, so they should be monotonic in t. Your data as listed is not monotonic, so if it is indeed FAR and FRR, then the measurements were not given in order. For the sake of clarity, we can sort them:
FAR FRR
1 1.65 98.33
2 1.82 84.31
3 3.96 60.56
4 7.92 42.92
5 13.2 29.17
6 15.35 26.39
7 16.83 23.47
8 17.99 21.39
9 19.14 20.28
10 19.64 20
11 21.29 18.61
12 24.92 17.08
13 26.07 16.39
14 29.04 13.13
15 34.49 9.31
16 40.76 6.81
17 50.33 5.42
18 66.83 1.67
19 82.51 0.28
This is for increasing FAR, which assumes a distance score; if you have a similarity score, then FAR would be sorted in decreasing order.
Loop over FAR until it is larger than FRR, which occurs at row 11. Then interpolate the crossover value between rows 10 and 11. This is your equal error rate.
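A short sketch of that crossover step (assuming simple linear interpolation), using the sorted FAR/FRR values from the table above:

import numpy as np

far = np.array([1.65, 1.82, 3.96, 7.92, 13.2, 15.35, 16.83, 17.99, 19.14,
                19.64, 21.29, 24.92, 26.07, 29.04, 34.49, 40.76, 50.33,
                66.83, 82.51])
frr = np.array([98.33, 84.31, 60.56, 42.92, 29.17, 26.39, 23.47, 21.39, 20.28,
                20.0, 18.61, 17.08, 16.39, 13.13, 9.31, 6.81, 5.42, 1.67, 0.28])

diff = far - frr
i = np.argmax(diff > 0)                          # first row where FAR exceeds FRR (row 11)
frac = -diff[i - 1] / (diff[i] - diff[i - 1])    # fractional position of the crossover
eer = far[i - 1] + frac * (far[i] - far[i - 1])  # interpolate between rows 10 and 11
print(eer)                                       # roughly 19.8 for this data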