Find the number of days in each month between 2 given dates - tableau-api

My input is :
catg_desc equipment_number present_date
STANDBY 123 24-06-2018
OTHERS 123 21-04-2019
READY 123 26-04-2019
JOB 256 26-04-2019
I have solved this scenario in PostgreSQL, but it multiplies the number of records. I don't want to increase the record count in the final table, as it can go up to 35,000,000 rows, which is difficult to handle in Tableau.
Using generate_series, we insert rows for the missing months.
Expected Output:
catg_desc equipment_number present_date Mon-yy no of days
STANDBY 123 24-06-2018 Jun-18 7
STANDBY 123 24-06-2018 Jul-18 31
STANDBY 123 24-06-2018 Aug-18 31
STANDBY 123 24-06-2018 Sep-18 30
STANDBY 123 24-06-2018 Oct-18 31
STANDBY 123 24-06-2018 Nov-18 30
STANDBY 123 24-06-2018 Dec-18 31
STANDBY 123 24-06-2018 Jan-19 31
STANDBY 123 24-06-2018 Feb-19 28
STANDBY 123 24-06-2018 Mar-19 31
STANDBY 123 24-06-2018 Apr-19 20
OTHERS 123 21-04-2019 Apr-19 5
READY 123 26-04-2019 Apr-19 5
READY 123 26-04-2019 May-19 22 (till current date)
JOB 256 26-04-2019 Apr-19 5
JOB 256 26-04-2019 May-19 22 (till current date)
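The per-month split above can be computed without multiplying records: for each status, walk month by month from its start date to the next status change (or today), counting the overlapping days. A minimal Python sketch of that logic (the function name and date handling are illustrative, not taken from the original PostgreSQL query):

```python
from calendar import monthrange
from datetime import date, timedelta

def days_per_month(start, end):
    """Split the half-open interval [start, end) into (Mon-yy, day count) pairs."""
    out = []
    cur = start
    while cur < end:
        # First day of the next month.
        month_end = date(cur.year, cur.month,
                         monthrange(cur.year, cur.month)[1]) + timedelta(days=1)
        stop = min(end, month_end)
        out.append((cur.strftime("%b-%y"), (stop - cur).days))
        cur = stop
    return out

# STANDBY runs from 24-06-2018 until the next status begins on 21-04-2019.
print(days_per_month(date(2018, 6, 24), date(2019, 4, 21)))
# → starts with ('Jun-18', 7) and ends with ('Apr-19', 20)
```

The same walk is what a generate_series join does in SQL, but doing it per status interval keeps one output row per month rather than one per day.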

If you have two date fields, use DATEDIFF(). Create a calculated field as below:
DATEDIFF('day', [Startdate], [enddate])

How does the commit timestamp work internally?

Those files are present in the folder /pg_commit_ts:
-rw-------. 1 postgres postgres 262144 Jun 17 12:56 0000
-rw-------. 1 postgres postgres 262144 Jun 17 12:56 0001
-rw-------. 1 postgres postgres 262144 Jun 17 12:57 0002
...
Are those files created only if track_commit_timestamp is on?
Yes, these files are only created if track_commit_timestamp = on. You cannot get the last committed statement, but you can use pg_last_committed_xact() to get the timestamp and transaction ID of the last committed transaction (see the documentation).

What is the schema of Kerberos database?

I dumped a Kerberos database with
$ kdb5_util dump /User/user/kerberos/dbdump
In the file, each line contains principal information like:
princ 38 23 4 3 0 PRINCIPAL#REALM 4224 86400 604800 0 0 0 1618454069 0 3 40 XXX 2 25 XXX 8 2 0100 1 4 XXX 1 7 18 62 XXX 1 7 17 46 XXX 1 7 16 54 XXX -1;
However, I cannot figure out what each column means.
I want to find locked principals from this database.
How can I get the schema of a dumped kerberos database?
I emailed the Kerberos team, and they said that documenting the full dump file format is on their to-do list.
Instead, they suggested using tabdump, which supports various dump types: https://web.mit.edu/kerberos/krb5-devel/doc/admin/admin_commands/kdb5_util.html#tabdump
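Since tabdump output is tab-separated, finding locked principals reduces to filtering rows on the relevant flag column. A minimal Python sketch, assuming the dump includes a header row; the column names "name" and "lockout" below are placeholders, not the actual tabdump schema, so check the header of the dumptype you pick:

```python
import csv
import io

def flagged_principals(tabdump_text, name_col, flag_col):
    """Return principal names whose flag column is set, from a
    tab-separated dump whose first row names the columns."""
    reader = csv.DictReader(io.StringIO(tabdump_text), delimiter="\t")
    return [row[name_col] for row in reader
            if row.get(flag_col, "") not in ("", "0", "-")]

# Hypothetical two-principal dump with a made-up "lockout" column.
sample = "name\tlockout\nalice@REALM\t0\nbob@REALM\t1\n"
print(flagged_principals(sample, "name", "lockout"))  # → ['bob@REALM']
```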

Spark SQL group data by range and trigger alerts

I am processing a data stream from Kafka using Structured Streaming with PySpark. I want to publish alerts to Kafka in Avro format if the readings are abnormal.
source temperature timestamp
1001 21 4/28/2019 10:25
1001 22 4/28/2019 10:26
1001 23 4/28/2019 10:27
1001 24 4/28/2019 10:28
1001 25 4/28/2019 10:29
1001 34 4/28/2019 10:30
1001 37 4/28/2019 10:31
1001 36 4/28/2019 10:32
1001 38 4/28/2019 10:33
1001 40 4/28/2019 10:34
1001 41 4/28/2019 10:35
1001 42 4/28/2019 10:36
1001 45 4/28/2019 10:37
1001 47 4/28/2019 10:38
1001 50 4/28/2019 10:39
1001 41 4/28/2019 10:40
1001 42 4/28/2019 10:41
1001 45 4/28/2019 10:42
1001 47 4/28/2019 10:43
1001 50 4/28/2019 10:44
Transform
source range count alert
1001 21-25 5 HIGH
1001 26-30 5 MEDIUM
1001 40-45 5 MEDIUM
1001 45-50 5 HIGH
I have defined a window of 20 seconds sliding every 1 second. I am able to publish alerts with a simple where condition, but I am not able to transform the data frame as above and trigger alerts when the count reaches 20 for any alert priority (i.e., all records in a window match one priority: HIGH -> count(20), etc.). Can anyone suggest how to do this?
Also, I am able to publish data in JSON format, but I am not able to generate Avro. Scala and Java have a to_avro() function, but PySpark does not.
Appreciate your response
I was able to solve this problem using the Bucketizer feature transformer from Spark's ML library.
How to bin in PySpark?
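For reference, Bucketizer maps a continuous column into bucket indices given a list of split points (in PySpark: pyspark.ml.feature.Bucketizer(splits=..., inputCol=..., outputCol=...)). A pure-Python sketch of that bucketing logic, with split points and an alert mapping made up here to mirror the ranges in the question:

```python
from bisect import bisect_right

# Hypothetical split points; bucket i covers [SPLITS[i], SPLITS[i+1]).
SPLITS = [20, 25, 30, 40, 45, 50]
ALERTS = {0: "HIGH", 1: "MEDIUM", 3: "MEDIUM", 4: "HIGH"}  # bucket index -> priority

def bucket(reading):
    """Bucket index for a reading, mirroring Bucketizer's semantics:
    half-open intervals, except the last bucket includes its upper bound."""
    if reading == SPLITS[-1]:
        return len(SPLITS) - 2
    i = bisect_right(SPLITS, reading) - 1
    return i if 0 <= i < len(SPLITS) - 1 else None

def alert(reading):
    """Alert priority for a reading, or None if its bucket has no alert."""
    return ALERTS.get(bucket(reading))

print(bucket(21), alert(21))  # → 0 HIGH
```

Grouping by window and bucket index and counting then gives the transformed table; an alert fires when a bucket's count reaches the window size.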

Bootloader lock failure: standard locking commands not working; I think it's a hash failure or a corrupted SST key

I do not know why, but my friend likes to hook her phone up to a PC and play with it. She has a basic knowledge of ADB and fastboot commands, and I verified with her what was thrown. When she went to re-lock the bootloader, it did not work. She downloaded the Google minimal SDK tools to get the updated ADB and fastboot, then went further and got mfastboot from Motorola to ensure correct parsing when flashing. All of these fastboot packages were also tested on Mac, Linux (Ubuntu), Windows 8.1 Pro N Update 1, and Windows 7 Professional N SP2 (all x64), and all resulted in the same errors. She is very thorough, and I only taught her how to manually erase and flash, with no scripts or toolkits. She ran
fastboot oem lock
and it returned:
(bootloader) FAIL: Please run fastboot oem lock begin first!
(bootloader) sst lock failure!
FAILED (remote failure)
finished. total time: 0.014s
Then she tried again, and again, and again. At this point she either read the log and followed it, or, I suspect, she panicked, because she needs the bootloader locked for work, and started attempting to flash. She ran
fastboot oem lock begin
and it returned:
M:\SHAMU\FACTORY IMAGE\shamu-lmy47z>fastboot oem lock begin
...
(bootloader) Ready to flash signed images
OKAY [ 0.121s]
finished. total time: 0.123s
FACTORY IMAGE\shamu-lmy47z>fastboot flash boot boot.img
target reported max download size of 536870912 bytes
sending 'boot' (7731 KB)...
OKAY [ 0.252s]
writing 'boot'...
(bootloader) Preflash validation failed
FAILED (remote failure)
finished. total time: 0.271s
Then the bootloader log stated
cmd: oem lock
hab check failed for boot
failed to validate boot image
Upon flashing boot.img, the bootloader log lists "Mismatched partition size (boot)".
Interestingly, sometimes it returns:
fastboot oem lock begin
...
(bootloader) Ready to flash signed images
OKAY [ 0.121s]
finished. total time: 0.123s
fastboot flash boot boot.img
target reported max download size of 536870912 bytes
sending 'boot' (7731 KB)...
OKAY [ 0.252s]
writing 'boot'...
(bootloader) Preflash validation failed
FAILED (remote failure)
finished. total time: 0.271s
I checked the partitions to see if they are zeroed out, which would indicate bad eMMC, but they are not:
cat /proc/partitions
major minor #blocks name
179 0 61079552 mmcblk0
179 1 114688 mmcblk0p1
179 2 16384 mmcblk0p2
179 3 384 mmcblk0p3
179 4 56 mmcblk0p4
179 5 16 mmcblk0p5
179 6 32 mmcblk0p6
179 7 1024 mmcblk0p7
179 8 256 mmcblk0p8
179 9 512 mmcblk0p9
179 10 500 mmcblk0p10
179 11 4156 mmcblk0p11
179 12 384 mmcblk0p12
179 13 1024 mmcblk0p13
179 14 256 mmcblk0p14
179 15 512 mmcblk0p15
179 16 500 mmcblk0p16
179 17 4 mmcblk0p17
179 18 512 mmcblk0p18
179 19 1024 mmcblk0p19
179 20 1024 mmcblk0p20
179 21 1024 mmcblk0p21
179 22 1024 mmcblk0p22
179 23 16384 mmcblk0p23
179 24 16384 mmcblk0p24
179 25 2048 mmcblk0p25
179 26 32768 mmcblk0p26
179 27 256 mmcblk0p27
179 28 32 mmcblk0p28
179 29 128 mmcblk0p29
179 30 8192 mmcblk0p30
179 31 1024 mmcblk0p31
259 0 2528 mmcblk0p32
259 1 1 mmcblk0p33
259 2 8 mmcblk0p34
259 3 16400 mmcblk0p35
259 4 9088 mmcblk0p36
259 5 16384 mmcblk0p37
259 6 262144 mmcblk0p38
259 7 65536 mmcblk0p39
259 8 1024 mmcblk0p40
259 9 2097152 mmcblk0p41
259 10 58351488 mmcblk0p42
179 32 4096 mmcblk0rpmb
254 0 58351488 dm-0
I've asked for a log of the whole process so I can see the full warning, error, and failure messages, but she is away on business. From what I do have, and from my research into the Android boot process, I am starting to believe there is a missing or corrupted key in the SST table (which I believe Google calls the bigtable), or a hash failure when locking down the bootloader security. I could be way off; please let me know. What I do not know is how to investigate or disprove this so I can move on. Would I be able to get confirmation through a stack trace of the missing or corrupted data? Honestly, this has become a puzzle that begs to be solved rather than an emergency. Thanks.
You should try the "fastboot flashing lock" command instead.

Mongod resident memory usage low

I'm trying to debug some performance issues with a MongoDB configuration, and I noticed that the resident memory usage is sitting very low (around 25% of the system memory) despite the fact that there are occasionally large numbers of faults occurring. I'm surprised to see the usage so low given that MongoDB is so memory dependent.
Here's a snapshot of top sorted by memory usage. It can be seen that no other process is using any significant memory:
top - 21:00:47 up 136 days, 2:45, 1 user, load average: 1.35, 1.51, 0.83
Tasks: 62 total, 1 running, 61 sleeping, 0 stopped, 0 zombie
Cpu(s): 13.7%us, 5.2%sy, 0.0%ni, 77.3%id, 0.3%wa, 0.0%hi, 1.0%si, 2.4%st
Mem: 1692600k total, 1676900k used, 15700k free, 12092k buffers
Swap: 917500k total, 54088k used, 863412k free, 1473148k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2461 mongodb 20 0 29.5g 564m 492m S 22.6 34.2 40947:09 mongod
20306 ubuntu 20 0 24864 7412 1712 S 0.0 0.4 0:00.76 bash
20157 root 20 0 73352 3576 2772 S 0.0 0.2 0:00.01 sshd
609 syslog 20 0 248m 3240 520 S 0.0 0.2 38:31.35 rsyslogd
20304 ubuntu 20 0 73352 1668 872 S 0.0 0.1 0:00.00 sshd
1 root 20 0 24312 1448 708 S 0.0 0.1 0:08.71 init
20442 ubuntu 20 0 17308 1232 944 R 0.0 0.1 0:00.54 top
I'd like to at least understand why the memory isn't being better utilized by the server, and ideally to learn how to optimize either the server config or queries to improve performance.
UPDATE:
It's fair to say the memory usage looks high, which might suggest that another process is responsible. However, there are no other processes using any significant memory on the server; the memory appears to be consumed in the cache, but I'm not clear why that would be the case:
$free -m
total used free shared buffers cached
Mem: 1652 1602 50 0 14 1415
-/+ buffers/cache: 172 1480
Swap: 895 53 842
UPDATE:
You can see that the database is still page faulting:
insert query update delete getmore command flushes mapped vsize res faults locked db idx miss % qr|qw ar|aw netIn netOut conn set repl time
0 402 377 0 1167 446 0 24.2g 51.4g 3g 0 <redacted>:9.7% 0 0|0 1|0 217k 420k 457 mover PRI 03:58:43
10 295 323 0 961 592 0 24.2g 51.4g 3.01g 0 <redacted>:10.9% 0 14|0 1|1 228k 500k 485 mover PRI 03:58:44
10 240 220 0 698 342 0 24.2g 51.4g 3.02g 5 <redacted>:10.4% 0 0|0 0|0 164k 429k 478 mover PRI 03:58:45
25 449 359 0 981 479 0 24.2g 51.4g 3.02g 32 <redacted>:20.2% 0 0|0 0|0 237k 503k 479 mover PRI 03:58:46
18 469 337 0 958 466 0 24.2g 51.4g 3g 29 <redacted>:20.1% 0 0|0 0|0 223k 500k 490 mover PRI 03:58:47
9 306 238 1 759 325 0 24.2g 51.4g 2.99g 18 <redacted>:10.8% 0 6|0 1|0 154k 321k 495 mover PRI 03:58:48
6 301 236 1 765 325 0 24.2g 51.4g 2.99g 20 <redacted>:11.0% 0 0|0 0|0 156k 344k 501 mover PRI 03:58:49
11 397 318 0 995 395 0 24.2g 51.4g 2.98g 21 <redacted>:13.4% 0 0|0 0|0 198k 424k 507 mover PRI 03:58:50
10 544 428 0 1237 532 0 24.2g 51.4g 2.99g 13 <redacted>:15.4% 0 0|0 0|0 262k 571k 513 mover PRI 03:58:51
5 291 264 0 878 335 0 24.2g 51.4g 2.98g 11 <redacted>:9.8% 0 0|0 0|0 163k 330k 513 mover PRI 03:58:52
It appears this was being caused by a large amount of inactive memory on the server that wasn't being cleared for Mongo's use.
By looking at the result from:
cat /proc/meminfo
I could see a large amount of inactive memory. Running this command as root:
free && sync && echo 3 > /proc/sys/vm/drop_caches && echo "" && free
freed up the inactive memory, and over the next 24 hours the resident memory of my Mongo instance grew to consume the rest of the memory available on the server.
Credit to the following blog post for its instructions:
http://tinylan.com/index.php/article/how-to-clear-inactive-memory-in-linux
MongoDB only uses as much memory as it needs, so if all of the data and indexes in MongoDB fit inside what it's currently using, you won't be able to push that any higher.
If the data set is larger than memory, there are a couple of considerations:
Check MongoDB itself to see how much data it thinks it is using by running mongostat and looking at the resident memory column.
Was MongoDB re/started recently? If it's cold then the data won't be in memory until it gets paged in (leading to more page faults initially that gradually settle). Check out the touch command for more information on "warming MongoDB up"
Check your read ahead settings. If your system read ahead is too high then MongoDB can't efficiently use the memory on the system. For MongoDB a good number to start with is a setting of 32 (that's 16 KB of read ahead assuming you have 512 byte blocks)
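The arithmetic behind that 16 KB figure, assuming readahead is counted in 512-byte sectors:

```python
SECTOR_BYTES = 512        # readahead is specified in 512-byte sectors
readahead_sectors = 32    # the suggested starting value for MongoDB
print(readahead_sectors * SECTOR_BYTES)  # → 16384 bytes, i.e. 16 KB
```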
I had the same issue: Windows Server 2008 R2, 16 GB RAM, MongoDB 2.4.3. Mongo used only 2 GB of RAM and generated a lot of page faults. Queries were very slow, while the disk was idle and memory was free. I found no solution other than upgrading to 2.6.5, which helped.