What's the format of tm->when in /proc/net/tcp?

I need to know what tm->when means, but proc(5) doesn't say anything helpful.
Does it store the creation time of the socket? The number seems to decrease each time I view the file.
root@ubuntu-vm:~# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:0CEA 00000000:0000 0A 00000000:00000000 00:00000000 00000000 104 0 17410 1 dddb6d00 100 0 0 10 -1
1: 00000000:0016 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 7959 1 dddb4500 100 0 0 10 -1
2: B238A8C0:0016 0138A8C0:9C96 01 00000000:00000000 02:00061444 00000000 0 0 8243 4 daa3c000 20 4 27 10 16
3: B238A8C0:0CEA 0138A8C0:8753 01 00000000:00000000 02:0009C787 00000000 104 0 19467 2 daa3e300 20 4 18 10 -1

From "Exploring the /proc/net/ Directory":
The tr field indicates whether a timer is active for this socket. A value of zero indicates the timer is not active. The tm->when field indicates the time remaining (in jiffies) before timeout occurs.
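For example, socket 2 in the dump above shows 02:00061444 in that column: timer code 02 is active and 0x61444 = 398404 ticks remain. Below is a minimal decoding sketch in Python; it assumes, per the quote, that the value counts jiffies at HZ = 100, but some kernels scale this field to USER_HZ clock ticks instead, so treat the divisor as an assumption to verify on your system.

# A minimal sketch: decode the "tr tm->when" column of /proc/net/tcp.
# Assumes HZ = 100 ticks per second (verify against your kernel config).
HZ = 100

with open("/proc/net/tcp") as f:
    next(f)  # skip the header row
    for line in f:
        fields = line.split()
        timer, when = fields[5].split(":")  # e.g. "02:00061444"
        if int(timer, 16):                  # non-zero: a timer is running
            print(f"socket {fields[0]} timer {timer}: "
                  f"{int(when, 16) / HZ:.1f}s until it fires")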

Find a boolean expression from a truth table (several bits)

I have the following truth table (a and b being my inputs and r the result):
a  b  r
00 00 00
00 01 01
00 11 01
01 00 01
01 01 01
01 11 01
11 00 01
11 01 01
11 11 11
The issue is that I can't find a boolean expression for this truth table.
Another similar thread pointed out that Karnaugh maps could solve it, but I can't find any implementation that works with multi-bit inputs.
Note that in my model the second bit doesn't matter if the first one is set for a given input, so if it simplifies the boolean expression I can force it to 0, force it to 1, or leave it unconstrained.
Truth Table (given):
a0 a1 b0 b1 r0 r1
0 0 0 0 0 0
0 0 0 1 0 1
0 0 1 1 0 1
0 1 0 0 0 1
0 1 0 1 0 1
0 1 1 1 0 1
1 1 0 0 0 1
1 1 0 1 0 1
1 1 1 1 1 1
Kmaps (columns a0a1, rows b0b1; x = don't care, since b0b1 = 10 never occurs in the inputs):

r0:
      a0a1
b0b1   00 01 11 10
  00    0  0  0  0
  01    0  0  0  0
  11    0  0  1  0
  10    x  x  x  x

r1:
      a0a1
b0b1   00 01 11 10
  00    0  1  1  1
  01    1  1  1  1
  11    1  1  1  1
  10    x  x  x  x

Boolean Expressions:
r0 = a0·a1·b0
r1 = a0 + a1 + b1
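A quick way to sanity-check these expressions is to evaluate every row of the given table; a minimal sketch in Python, reading · as AND and + as OR:

# Each tuple is (a0, a1, b0, b1, r0, r1), copied from the table above.
rows = [
    (0,0,0,0,0,0), (0,0,0,1,0,1), (0,0,1,1,0,1),
    (0,1,0,0,0,1), (0,1,0,1,0,1), (0,1,1,1,0,1),
    (1,1,0,0,0,1), (1,1,0,1,0,1), (1,1,1,1,1,1),
]
for a0, a1, b0, b1, r0, r1 in rows:
    assert r0 == (a0 & a1 & b0)  # r0 = a0·a1·b0
    assert r1 == (a0 | a1 | b1)  # r1 = a0 + a1 + b1
print("both expressions match all 9 rows")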

How to average data per week?

I hope someone can help me; I am just starting to use R.
First of all, I would like to know whether it is possible in R to determine the week of the year from the day my data was collected. I have been doing this manually, but it takes a long time and increases the chance of my making a mistake.
I am also interested in getting the average of each week. For example, I have 2 data points in week 21.
An example of my data:
Week Date Class 1 g/plant Total g/plant 10 berry weight Brix
21 26/05/2022 34.53571429 34.53571429 25.7 11.55
21 28/05/2022 35.39285714 39.25 27.1 10.98
22 31/05/2022 41.17857143 41.17857143 22.8 11.8
22 03/06/2022 57.60714286 57.60714286 22.2 10.91
23 06/06/2022 23.67857143 23.67857143 26.4 12.3
23 09/06/2022 23.60714286 24.14285714 24.7 12.63
24 14/06/2022 18.82142857 19.78571429 26.4 12.8
24 18/06/2022 20.78571429 20.78571429 30 12.05
25 21/06/2022 3.178571429 3.25 22.2 10.3
25 23/06/2022 0 0 0 0
25 25/06/2022 0 0 0 0
26 28/06/2022 0 0 0 0
26 01/07/2022 0 0 0 0
27 05/07/2022 0 0 0 0
27 09/07/2022 0 0 0 0
28 12/07/2022 0 0 0 0
28 14/07/2022 0 0 0 0
28 16/07/2022 0 0 0 0
30 26/07/2022 50.89285714 50.89285714 27.6 9.85
30 29/07/2022 19.39285714 19.39285714 19.1 10.58
31 02/08/2022 68.57142857 68.57142857 25 8.91
31 06/08/2022 58.75 58.75 24.9 8.81
32 09/08/2022 46.57142857 46.57142857 17.7 8.92
32 11/08/2022 24.25 24.25 17.2 9.77
32 13/08/2022 32.14285714 32.14285714 16 20.41
33 16/08/2022 53.14285714 53.14285714 19.7 10.09
33 20/08/2022 57.96428571 59.25 17.8 9.49
34 25/08/2022 28.10714286 28.10714286 18 9.99
35 30/08/2022 81.03571429 81.60714286 19.6 10.89
35 02/09/2022 22.53571429 22.53571429 14.8 10.04
36 06/09/2022 36.53571429 38.96428571 17.9 11.18
36 09/09/2022 24.5 25.71428571 17.3 10.48
37 16/09/2022 57.35714286 60.96428571 21.2 12.21
38 21/09/2022 5.142857143 7.142857143 13.5 11.58
39 30/09/2022 29.9047619 31.76190476 16.4 15.49
40 07/10/2022 22.9047619 24.47619048 16.4 15.12
41 12/10/2022 14.61904762 14.85714286 12.5 14.14
42 19/10/2022 15.57142857 17.04761905 15.6 14.24
43 26/10/2022 20.14285714 22.0952381 17.6 12.32
Thank you in advance!
Alex
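A minimal sketch of one way to do both in base R (assuming the sample above is in a data frame with syntactic column names; dates are day/month/year as shown, and %V gives the ISO-8601 week of the year):

# Toy data frame standing in for the real one (rename columns with
# spaces, e.g. "Total g/plant", before using them in formulas).
df <- data.frame(
  Date  = c("26/05/2022", "28/05/2022", "31/05/2022", "03/06/2022"),
  Total = c(34.54, 39.25, 41.18, 57.61),
  Brix  = c(11.55, 10.98, 11.80, 10.91)
)
df$Date <- as.Date(df$Date, format = "%d/%m/%Y")  # day/month/year
df$Week <- as.integer(format(df$Date, "%V"))      # ISO-8601 week number

# mean of each measurement per week
aggregate(cbind(Total, Brix) ~ Week, data = df, FUN = mean)
#   Week  Total   Brix
# 1   21 36.895 11.265
# 2   22 49.395 11.355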

Out of memory issue while running a query on a table with a jsonb field in PostgreSQL 13

QUERY:
SELECT first_table.claimno,
first_table.claseq,
first_table.clientname,
first_table.linseq,
first_table.batchdate,
first_table.deny_proc_code,
first_table.allow_proc_code,
first_table.predictions,
first_table.score,
second_table.deleted,
second_table.review
FROM (( SELECT t.id,
(t.data -> 'claimno'::text) AS claimno,
(t.data ->> 'claseq'::text) AS claseq,
(t.response ->> 'clientcode'::text) AS clientname,
to_timestamp((((t.response ->> 'timestamp'::text))::numeric)::double precision) AS batchdate,
(o.value ->> 'linseq'::text) AS linseq,
(o.value ->> 'deny_proc_code'::text) AS deny_proc_code,
(o.value ->> 'allow_proc_code'::text) AS allow_proc_code,
(k.value ->> 'action'::text) AS predictions,
((k.value ->> 'score'::text))::numeric AS score
FROM dummy t,
LATERAL jsonb_array_elements((t.response -> 'lines'::text)) o(value),
LATERAL jsonb_array_elements(
CASE
WHEN (jsonb_typeof((o.value -> 'flags'::text)) ~~ 'array'::text) THEN (o.value -> 'flags'::text)
ELSE '[{"key": "team_q:"}]'::jsonb
END) k(value)) first_table
JOIN ( SELECT (k.value ->> 'deleted'::text) AS deleted,
(k.value ->> 'review'::text) AS review,
(k.value ->> 'claseq'::text) AS claseq,
(t.data -> 'claimno'::text) AS claimno,
(k.value ->> 'linseq'::text) AS linseq
FROM dummy t,
LATERAL jsonb_array_elements((t.data -> 'lines'::text)) o(value),
LATERAL jsonb_array_elements(
CASE
WHEN (jsonb_typeof((o.value -> 'flags'::text)) ~~ 'array'::text) THEN (o.value -> 'flags'::text)
ELSE '[{"key": "team_q:"}]'::jsonb
END) k(value)) second_table ON (((first_table.claseq = second_table.claseq) AND (first_table.linseq = second_table.linseq))));
ERROR MESSAGE:
postgres=#
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed
$ dmesg
[10881799.496693] [41883] 1983985914 41883 28630 272 59 0 0 sshd
[10881799.497029] [41892] 1983985914 41892 29461 145 16 0 0 bash
[10881799.497373] [42010] 0 42010 29173 304 63 0 0 sshd
[10881799.497698] [42237] 0 42237 54438 477 63 0 0 dzdo
[10881799.498037] [42238] 0 42238 49670 189 55 0 0 su
[10881799.498366] [42239] 26 42239 29983 145 17 0 0 bash
[10881799.498707] [42286] 1983984643 42286 28630 263 59 0 0 sshd
[10881799.499036] [42287] 0 42287 29173 305 61 0 0 sshd
[10881799.499384] [42297] 26 42297 46729 259 45 0 0 psql
[10881799.499735] [42298] 26 42298 105479 1044 83 196 0 postmaster
[10881799.500076] [42307] 1983984643 42307 28630 276 58 0 0 sshd
[10881799.500448] [42308] 1983984643 42308 29461 109 16 0 0 bash
[10881799.500796] [42319] 1983984643 42319 6061 141 18 0 0 sftp-server
[10881799.501171] [42663] 0 42663 27023 24 10 0 0 tail
[10881799.501538] [42794] 0 42794 30815 339 63 0 0 sshd
[10881799.501896] [42830] 0 42830 30815 325 61 0 0 sshd
[10881799.502277] [42831] 0 42831 29992 135 16 0 0 bash
[10881799.502656] [43249] 0 43249 29173 305 62 0 0 sshd
[10881799.503026] [43358] 0 43358 94257 4873 151 0 -900 rhsmd
[10881799.503412] [43474] 1983985914 43474 28630 297 58 0 0 sshd
[10881799.503793] [43484] 1983985914 43484 29461 129 15 0 0 bash
[10881799.504163] [43832] 89 43832 23066 299 47 0 0 cleanup
[10881799.504564] [43833] 89 43833 23030 297 48 0 0 trivial-rewrite
[10881799.504957] [43840] 89 43840 23081 273 46 0 0 smtp
[10881799.505342] [43927] 0 43927 54438 480 60 0 0 dzdo
[10881799.505716] [43934] 0 43934 49670 192 55 0 0 su
[10881799.506103] [43935] 26 43935 29983 146 16 0 0 bash
[10881799.506480] [43998] 26 43998 46729 259 44 0 0 psql
[10881799.506860] [43999] 26 43999 105475 1331 99 195 0 postmaster
[10881799.507244] [44569] 26 44569 104789 807 77 244 0 postmaster
[10881799.507616] Out of memory: Kill process 37366 (postmaster) score 474 or sacrifice child
[10881799.508019] Killed process 37366 (postmaster), UID 26, total-vm:5169780kB, anon-rss:4639752kB, file-rss:656kB, shmem-rss:133972kB
-bash-4.2$
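The kernel log above shows the Linux OOM killer terminating the postmaster after the backend's memory (anon-rss of about 4.4 GB) exhausted the host. A first diagnostic step (standard PostgreSQL commands, offered here as a sketch rather than a confirmed fix) is to compare the session's memory settings with the RAM actually available, and to look at the row counts the nested jsonb_array_elements() calls produce, since each LATERAL expansion multiplies the rows flowing into the join:

SHOW work_mem;        -- per-sort/hash memory each executor node may use
SHOW shared_buffers;  -- shared memory reserved at server start

-- Prefix the failing statement with EXPLAIN (without ANALYZE) to see the
-- planner's row estimates for the two jsonb_array_elements() expansions
-- without actually executing the query.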

KDB: apply logic where a column exists (data validation)

I'm trying to perform some simple logic on a table, but I'd like to verify that the columns exist beforehand as a validation step. My data uses standard column names, though they are not always present in each data source.
While the following seems to work (just validating AAA at present), I need to expand it to ensure that PRI_AAA (and eventually many other variables) is present as well.
t: $[`AAA in cols `t; temp: update AAA_VAL: AAA*AAA_PRICE from t;()]
Two-part question:
This seems quite tedious for each variable (imagine AAA-ZZZ inputs and their derivatives). Is there a clever way to leverage a dictionary (or table) to check whether a number of variables exist, and to insert a placeholder column of zeros if they do not?
Similarly, can we store a formula or instructions to apply within a dictionary (or table) to validate and return a calculation (i.e. BBB_VAL: BBB*BBB_PRICE)? Some calculations would be dependent on others (i.e. BBB_Tax_Basis = BBB_VAL - BBB_COSTS, for example), so there could be ordering issues.
Thanks in advance!
A functional update may be the best way to achieve this if your intention is to update many columns of a table in a similar fashion.
func:{[t;x]
  / add a zero-filled placeholder column if x is missing
  if[not x in cols t;t:![t;();0b;(enlist x)!enlist 0]];
  / functional update: x_VAL = x * x_PRICE
  :$[x in cols t;
    ![t;();0b;(enlist`$string[x],"_VAL")!enlist(*;x;`$string[x],"_PRICE")];
    t];
  };
This function updates t with a *_VAL column for each column you pass as an argument, first adding a zero-filled column for any passed columns that are missing.
q)t:([]AAA:10?100;BBB:10?100;CCC:10?100;AAA_PRICE:10*10?10;BBB_PRICE:10*10?10;CCC_PRICE:10*10?10;DDD_PRICE:10*10?10)
q)func/[t;`AAA`BBB`CCC`DDD]
AAA BBB CCC AAA_PRICE BBB_PRICE CCC_PRICE DDD_PRICE AAA_VAL BBB_VAL CCC_VAL DDD DDD_VAL
---------------------------------------------------------------------------------------
70 28 89 10 90 0 0 700 2520 0 0 0
39 17 97 50 90 40 10 1950 1530 3880 0 0
76 11 11 0 0 50 10 0 0 550 0 0
26 55 99 20 60 80 90 520 3300 7920 0 0
91 51 3 30 20 0 60 2730 1020 0 0 0
83 81 7 70 60 40 90 5810 4860 280 0 0
76 68 98 40 80 90 70 3040 5440 8820 0 0
88 96 30 70 0 80 80 6160 0 2400 0 0
4 61 2 70 90 0 40 280 5490 0 0 0
56 70 15 0 50 30 30 0 3500 450 0 0
As you've already mentioned, to cover point 2, a dictionary of functions might be the best way to go.
q)dict:raze{(enlist`$string[x],"_VAL")!enlist(*;x;`$string[x],"_PRICE")}each`AAA`BBB`DDD
q)dict
AAA_VAL| * `AAA `AAA_PRICE
BBB_VAL| * `BBB `BBB_PRICE
DDD_VAL| * `DDD `DDD_PRICE
And then a slightly modified function...
func:{[dict;t;x]
  / add a zero-filled placeholder column if x is missing
  if[not x in cols t;t:![t;();0b;(enlist x)!enlist 0]];
  / look up the stored formula for x_VAL in the dictionary and apply it
  :$[x in cols t;
    ![t;();0b;(enlist`$string[x],"_VAL")!enlist(dict`$string[x],"_VAL")];
    t];
  };
yields a similar result.
q)func[dict]/[t;`AAA`BBB`DDD]
AAA BBB CCC AAA_PRICE BBB_PRICE CCC_PRICE DDD_PRICE AAA_VAL BBB_VAL DDD DDD_VAL
-------------------------------------------------------------------------------
70 28 89 10 90 0 0 700 2520 0 0
39 17 97 50 90 40 10 1950 1530 0 0
76 11 11 0 0 50 10 0 0 0 0
26 55 99 20 60 80 90 520 3300 0 0
91 51 3 30 20 0 60 2730 1020 0 0
83 81 7 70 60 40 90 5810 4860 0 0
76 68 98 40 80 90 70 3040 5440 0 0
88 96 30 70 0 80 80 6160 0 0 0
4 61 2 70 90 0 40 280 5490 0 0
56 70 15 0 50 30 30 0 3500 0 0
Here's another approach which handles dependent/cascading calculations and also figures out which calculations are possible or not depending on the available columns in the table.
q)show map:`AAA_VAL`BBB_VAL`AAA_RevenueP`AAA_RevenueM`BBB_Other!((*;`AAA;`AAA_PRICE);(*;`BBB;`BBB_PRICE);(+;`AAA_Revenue;`AAA_VAL);(%;`AAA_RevenueP;1e6);(reciprocal;`BBB_VAL));
AAA_VAL | (*;`AAA;`AAA_PRICE)
BBB_VAL | (*;`BBB;`BBB_PRICE)
AAA_RevenueP| (+;`AAA_Revenue;`AAA_VAL)
AAA_RevenueM| (%;`AAA_RevenueP;1000000f)
BBB_Other | (%:;`BBB_VAL)
func:{
  / recursively substitute references to other mapped columns with their definitions
  c:{$[0h=type y;.z.s[x]each y;-11h<>type y;y;y in key x;.z.s[x]each x y;y]}[y]''[y];
  / keep only calculations whose required columns (the remaining symbols) all
  / exist in the table, then apply them in a single functional update
  ![x;();0b;where[{all in[;cols x]r where -11h=type each r:(raze/)y}[x]each c]#c]};
q)t:([] AAA:1 2 3;AAA_PRICE:1 2 3f;AAA_Revenue:10 20 30;BBB:4 5 6);
q)func[t;map]
AAA AAA_PRICE AAA_Revenue BBB AAA_VAL AAA_RevenueP AAA_RevenueM
---------------------------------------------------------------
1 1 10 4 1 11 1.1e-05
2 2 20 5 4 24 2.4e-05
3 3 30 6 9 39 3.9e-05
/if the right columns are there
q)t:([] AAA:1 2 3;AAA_PRICE:1 2 3f;AAA_Revenue:10 20 30;BBB:4 5 6;BBB_PRICE:4 5 6f);
q)func[t;map]
AAA AAA_PRICE AAA_Revenue BBB BBB_PRICE AAA_VAL BBB_VAL AAA_RevenueP AAA_RevenueM BBB_Other
--------------------------------------------------------------------------------------------
1 1 10 4 4 1 16 11 1.1e-05 0.0625
2 2 20 5 5 4 25 24 2.4e-05 0.04
3 3 30 6 6 9 36 39 3.9e-05 0.02777778
The only caveat is that your map can't use the same column name both as a key and inside a value, i.e. column names cannot be re-used. It's also assumed that all symbols in your map are column names (not global variables), though the function could be extended to cover that.
EDIT: if you have a large number of column maps then it will be easier to define it in a more vertical fashion like so:
map:(!). flip(
(`AAA_VAL; (*;`AAA;`AAA_PRICE));
(`BBB_VAL; (*;`BBB;`BBB_PRICE));
(`AAA_RevenueP;(+;`AAA_Revenue;`AAA_VAL));
(`AAA_RevenueM;(%;`AAA_RevenueP;1e6));
(`BBB_Other; (reciprocal;`BBB_VAL))
);

Are graycomatrix's NumLevels and GrayLimits the same thing in MATLAB?

I've been looking at implementing GLCM within MATLAB using graycomatrix. There are two arguments that I have discovered (NumLevels and GrayLimits), but in my research and implementation they seem to achieve the same result.
GrayLimits specifies bins over a range [low high], restricting the set of gray levels considered.
NumLevels declares the number of gray levels in an image.
Could someone please explain the difference between these two arguments, as I don't understand why there would be two arguments that achieve the same result.
From the documentation:
'GrayLimits': Range used scaling input image into gray levels, specified as a two-element vector [low high]. If N is the number of gray levels (see parameter 'NumLevels') to use for scaling, the range [low high] is divided into N equal width bins and values in a bin get mapped to a single gray level.
'NumLevels': Number of gray levels, specified as an integer.
Thus the first parameter sets the input gray level range to be used (defaults to the min and max values in the image), and the second parameter sets the number of unique gray levels considered (and thus the size of the output matrix, defaults to 8, or 2 for binary images).
For example:
>> graycomatrix(img,'NumLevels',8,'GrayLimits',[0,255])
ans =
17687 1587 81 31 7 0 0 0
1498 7347 1566 399 105 8 0 0
62 1690 3891 1546 298 38 1 0
12 335 1645 4388 1320 145 4 0
2 76 305 1349 4894 959 18 0
0 16 40 135 965 7567 415 0
0 0 0 2 15 421 2410 0
0 0 0 0 0 0 0 0
>> graycomatrix(img,'NumLevels',8,'GrayLimits',[0,127])
ans =
1 9 0 0 0 0 0 0
7 17670 1431 156 50 31 23 15
1 1369 3765 970 350 142 84 92
0 128 1037 1575 750 324 169 167
0 46 361 836 1218 747 335 260
0 16 163 330 772 1154 741 547
0 10 74 150 370 787 1353 1208
0 4 67 136 294 539 1247 21199
>> graycomatrix(img,'NumLevels',4,'GrayLimits',[0,255])
ans =
28119 2077 120 0
2099 11470 1801 5
94 1829 14385 433
0 2 436 2410
As you can see, these parameters modify the output in different ways:
In the first case above, the range [0,255] was mapped to columns/rows 1-8, putting 32 different input grey values into each.
In the second case, the smaller range [0,127] was mapped to 8 indices, putting 16 different input grey values into each, and putting the remaining grey values 128-255 into the 8th index.
In the third case, the range [0,255] was mapped to 4 indices, putting 64 different input grey values into each.
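In other words, GrayLimits chooses the input range and NumLevels chooses how finely that range is divided. A rough sketch of the binning rule for integer images (my paraphrase of the documented behaviour, not graycomatrix's actual implementation):

# Hypothetical re-implementation of the scaling step, in Python, assuming
# uniform bins over [low, high] and clipping of out-of-range values to the
# first/last level, as the documentation describes for uint8 input.
def gray_level(v, low=0, high=255, num_levels=8):
    if v <= low:
        return 1                 # MATLAB-style 1-based level index
    if v >= high:
        return num_levels
    return 1 + int((v - low) * num_levels / (high - low + 1))

With the defaults, gray_level(40) is 2 (values 32-63 share bin 2), while gray_level(40, high=127) is 3, matching the narrower 16-value bins described above.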