Get-PhysicalDisk Sorting Number as text - powershell

When I get the results of Get-PhysicalDisk, it appears that the results are sorted as text rather than as numbers:
Get-PhysicalDisk | Where-Object { $_.CanPool -eq $True } | Sort-Object -Property DeviceID
I get
Number MediaType CanPool OperationalStatus HealthStatus Usage Size
10 SSD TRUE OK Healthy Auto-Select 6.82 TB
11 SSD TRUE OK Healthy Auto-Select 6.82 TB
12 SSD TRUE OK Healthy Auto-Select 6.82 TB
13 SSD TRUE OK Healthy Auto-Select 6.82 TB
14 SSD TRUE OK Healthy Auto-Select 6.82 TB
15 SSD TRUE OK Healthy Auto-Select 6.82 TB
16 SSD TRUE OK Healthy Auto-Select 6.82 TB
17 SSD TRUE OK Healthy Auto-Select 6.82 TB
18 SSD TRUE OK Healthy Auto-Select 6.82 TB
19 SSD TRUE OK Healthy Auto-Select 6.82 TB
20 SSD TRUE OK Healthy Auto-Select 6.82 TB
5 SSD TRUE OK Healthy Auto-Select 6.82 TB
6 SSD TRUE OK Healthy Auto-Select 6.82 TB
7 SSD TRUE OK Healthy Auto-Select 6.82 TB
8 SSD TRUE OK Healthy Auto-Select 6.82 TB
9 SSD TRUE OK Healthy Auto-Select 6.82 TB
So it appears that the sort is in text order and not numeric order. How do I sort it as a number?

.DeviceId is a string property, so you would need to cast it to [int] using a calculated expression to sort properly:
Get-PhysicalDisk -CanPool $true | Sort-Object { [int] $_.DeviceId }
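If you prefer to spell it out as a calculated property (standard Sort-Object syntax, shown here with the CanPool filter from the question), an equivalent sketch would be:
# Cast the string DeviceId to [int] so that 5 sorts before 10
Get-PhysicalDisk |
    Where-Object { $_.CanPool -eq $true } |
    Sort-Object -Property @{ Expression = { [int] $_.DeviceId } }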


Grafana Timeout while querying large amount of logs from Loki

I have a Loki server running on AWS Graviton (arm, 4 vCPU, 8 GiB) configured as follows:
common:
  replication_factor: 1
  ring:
    kvstore:
      store: etcd
      etcd:
        endpoints: ['127.0.0.1:2379']
storage_config:
  boltdb_shipper:
    active_index_directory: /opt/loki/index
    cache_location: /opt/loki/index_cache
    shared_store: s3
  aws:
    s3: s3://ap-south-1/bucket-name
limits_config:
  enforce_metric_name: false
  reject_old_samples: true
  reject_old_samples_max_age: 168h # 7d
  ingestion_rate_mb: 10
  ingestion_burst_size_mb: 20
  per_stream_rate_limit: 8MB
ingester:
  lifecycler:
    join_after: 30s
  chunk_block_size: 10485760
compactor:
  working_directory: /opt/loki/compactor
  shared_store: s3
  compaction_interval: 5m
schema_config:
  configs:
    - from: 2022-01-01
      store: boltdb-shipper
      object_store: s3
      schema: v11
      index:
        prefix: loki_
        period: 24h
table_manager:
  retention_period: 360h # 15d
  retention_deletes_enabled: true
  index_tables_provisioning: # unused
    provisioned_write_throughput: 500
    provisioned_read_throughput: 100
    inactive_write_throughput: 1
    inactive_read_throughput: 100
Ingestion is working fine and I'm able to query logs over long durations from streams with small data sizes. I'm also able to query small durations of logs from streams with TiBs of data.
I see the following error in Loki when I try to query 24h of data from a large stream, and Grafana times out after 5 minutes:
Feb 11 08:27:32 loki-01 loki[19490]: level=error ts=2022-02-11T08:27:32.186137309Z caller=retry.go:73 org_id=fake msg="error processing request" try=2 err="context canceled"
Feb 11 08:27:32 loki-01 loki[19490]: level=info ts=2022-02-11T08:27:32.186304708Z caller=metrics.go:92 org_id=fake latency=fast query="{filename=\"/var/log/server.log\",host=\"web-199\",ip=\"192.168.20.239\",name=\"web\"} |= \"attachDriver\"" query_type=filter range_type=range length=24h0m0s step=1m0s duration=0s status=499 limit=1000 returned_lines=0 throughput=0B total_bytes=0B
Feb 11 08:27:32 loki-01 loki[19490]: level=info ts=2022-02-11T08:27:32.23882892Z caller=metrics.go:92 org_id=fake latency=slow query="{filename=\"/var/log/server.log\",host=\"web-199\",ip=\"192.168.20.239\",name=\"web\"} |= \"attachDriver\"" query_type=filter range_type=range length=24h0m0s step=1m0s duration=59.813829694s status=400 limit=1000 returned_lines=153 throughput=326MB total_bytes=20GB
Feb 11 08:27:32 loki-01 loki[19490]: level=error ts=2022-02-11T08:27:32.238959314Z caller=scheduler_processor.go:199 org_id=fake msg="error notifying frontend about finished query" err="rpc error: code = Canceled desc = context canceled" frontend=192.168.5.138:9095
Feb 11 08:27:32 loki-01 loki[19490]: level=error ts=2022-02-11T08:27:32.23898877Z caller=scheduler_processor.go:154 org_id=fake msg="error notifying scheduler about finished query" err=EOF addr=192.168.5.138:9095
Query: {filename="/var/log/server.log",host="web-199",ip="192.168.20.239",name="web"} |= "attachDriver"
Is there a way to stream the results instead of waiting for the response? Can I optimize Loki to process such queries better?

Sum values from the previous N number of days in KDB?

I have a table with the following two columns:
Initial Table
Date Value
-------------------
2019.01.01 | 150
2019.01.02 | 100
2019.01.04 | 200
2019.01.07 | 300
2019.01.08 | 100
2019.01.10 | 150
2019.01.14 | 200
2019.01.15 | 100
For each row, I would like to sum the values from the previous N days. In this case, N = 5.
Resultant Table
Date Value Sum
------------------------
2019.01.01 | 150 | 150 (01 -> ..)
2019.01.02 | 100 | 250 (02 -> 01)
2019.01.04 | 200 | 450 (04 -> 01)
2019.01.07 | 300 | 600 (07 -> 02)
2019.01.08 | 100 | 600 (08 -> 04)
2019.01.10 | 150 | 550 (10 -> 07)
2019.01.14 | 200 | 350 (14 -> 10)
2019.01.15 | 100 | 450 (15 -> 10)
Query
t:([] Date: 2019.01.01 2019.01.02 2019.01.04 2019.01.07 2019.01.08 2019.01.10 2019.01.14 2019.01.15; Value: 150 100 200 300 100 150 200 100)
How can I go about doing that?
One way you could go about this is to use an update statement like below:
q)N:5
q)update Sum:sum each Value where each Date within/:flip(Date-N;Date)from t
Date Value Sum
--------------------
2019.01.01 150 150
2019.01.02 100 250
2019.01.04 200 450
2019.01.07 300 600
2019.01.08 100 600
2019.01.10 150 550
2019.01.14 200 350
2019.01.15 100 450
The within keyword checks whether each date in the Date column is within the window between the current date minus N and the current date, which is applied using each-right (/:).
q)flip(-5+t`Date;t`Date)
2018.12.27 2019.01.01
2018.12.28 2019.01.02
2018.12.30 2019.01.04
2019.01.02 2019.01.07
2019.01.03 2019.01.08
2019.01.05 2019.01.10
2019.01.09 2019.01.14
2019.01.10 2019.01.15
q)t[`Date]within/:flip(-5+t`Date;t`Date)
10000000b
11000000b
11100000b
01110000b
00111000b
00011100b
00000110b
00000111b
This will return a list of boolean lists, which can be turned into indexes using where each (each since it's a list of lists), and then used to index back into Value.
q)where each t[`Date]within/:flip(-5+t`Date;t`Date)
,0
0 1
0 1 2
1 2 3
2 3 4
3 4 5
5 6
5 6 7
q)t[`Value]where each t[`Date]within/:flip(-5+t`Date;t`Date)
,150
150 100
150 100 200
100 200 300
200 300 100
300 100 150
150 200
150 200 100
Then, using sum each, you can sum each of these lists of numbers to get your desired result.
q)sum each t[`Value]where each t[`Date]within/:flip(-5+t`Date;t`Date)
150 250 450 600 600 550 350 450
You could also achieve this using an update statement like the one below. It doesn't require the flip and so should execute faster.
q)N:5
q)delete s from update runningSum:s-0^s[Date bin neg[1]+Date-N] from update s:sums Value from t
Date Value runningSum
---------------------------
2019.01.01 150 150
2019.01.02 100 250
2019.01.04 200 450
2019.01.07 300 600
2019.01.08 100 600
2019.01.10 150 550
2019.01.14 200 350
2019.01.15 100 450
This works by taking a running sum of the Value column with sums, and then using bin to look up the running total from just before each row's N-day window; subtracting the two leaves the sum over the window.
The delete keyword then removes the intermediate running-sum column s to obtain your required result.
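To see what that expression is doing, here is a step-by-step sketch of the intermediate values, using the same t and N:5 as above (worked out by hand, so worth re-running in a q session):
q)s:sums t`Value                               / running total of Value
q)s
150 250 450 750 850 1000 1200 1300
q)t[`Date] bin neg[1]+t[`Date]-N               / index of the last date strictly before each row's window
-1 -1 -1 0 1 2 4 4
q)0^s[t[`Date] bin neg[1]+t[`Date]-N]          / running total just before the window; -1 indexes to null, filled with 0
0 0 0 150 250 450 850 850
q)s-0^s[t[`Date] bin neg[1]+t[`Date]-N]        / subtracting leaves the sum over each window
150 250 450 600 600 550 350 450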
q)\t:1000 delete s from update runningSum:s-0^s[Date bin neg[1]+Date-N] from update s:sums Value from t
7
While the time difference between this answer and Elliot's is negligible for small values of N, for larger values, e.g. 1000, this one is faster:
q)\t:1000 update Sum:sum each Value where each Date within/:flip(Date-1000;Date)from t
11
q)\t:1000 delete s from update runningSum:s-0^s[Date bin neg[1]+Date-1000] from update s:sums Value from t
7
It should be noted that this answer requires the Date field to be sorted, whereas Elliot's does not.
Another, slightly slower, way is to generate 0 values for all the dates between the min and max Date.
You can then use a moving sum, msum, to get the values for the past 5 days.
It first takes the min and max Date from the table and makes a list of the dates that span between them.
q)update t: 0^Value from ([]Date:{[x] x[0]+til 1+x[1]-x[0]} exec (min[Date], max Date) from t) lj `Date xkey t
Date Value t
--------------------
2019.01.01 150 150
2019.01.02 100 100
2019.01.03 0
2019.01.04 200 200
2019.01.05 0
2019.01.06 0
2019.01.07 300 300
2019.01.08 100 100
2019.01.09 0
2019.01.10 150 150
Then it adds them to the table and fills in the empty values. This will then work over only the previous N days, taking into account any missing dates:
q){[x] select from x where not null Value } update t: 5 msum 0^Value from ([]Date:{[x] x[0]+til 1+x[1]-x[0]} exec (min[Date], max Date) from t) lj `Date xkey t
Date Value t
--------------------
2019.01.01 150 150
2019.01.02 100 250
2019.01.04 200 450
2019.01.07 300 500
2019.01.08 100 600
2019.01.10 150 550
2019.01.14 200 350
2019.01.15 100 300
I would also be careful when using Value as a column name, as you can run into issues with the value keyword.
I hope this answers your question.
A window join is a pretty natural fit here. See: https://code.kx.com/v2/ref/wj/
q)wj1[-5 0+\:t`Date;`Date;t;(t;(sum;`Value))]
Date Value
----------------
2019.01.01 150
2019.01.02 250
2019.01.04 450
2019.01.07 600
2019.01.08 600
2019.01.10 550
2019.01.14 350
2019.01.15 450
To go back 5 observations rather than 5 calendar days you could do:
q)wj1[{(4 xprev x;x)}t`Date;`Date;t;(t;(sum;`Value))]
Date Value
----------------
2019.01.01 150
2019.01.02 250
2019.01.04 450
2019.01.07 750
2019.01.08 850
2019.01.10 850
2019.01.14 950
2019.01.15 850
You can define and use a moving-window function, mwin, to achieve this:
mwin:{[f;w;l] f each {1_x,y}\[w#0n;`float$l]}
You can then set the function f to sum and get the results over the last w:5 rows for the desired list of values l (here l:exec Value from t):
update Sum:(mwin[sum;5;] exec Value from t) from t
Date Value Sum
--------------------
2019.01.01 150 150
2019.01.02 100 250
2019.01.04 200 450
2019.01.07 300 750
2019.01.08 100 850
2019.01.10 150 850
2019.01.14 200 950
2019.01.15 100 850

Create PostgreSQL view to feed a chart generating tool having filter option

We need to create a PostgreSQL view to generate a chart. The charting tool allows only a single SQL view as input. The chart has filter options for student name, course code and fee code. Besides the chart display, we need to show the sum of the total course fees and the fee amounts paid by all the students, from the same view.
table1: student
id name address
1 John USA
2 Robert UK
3 Tinger NZ
table2: student_course
id std_id coursecode fee
1 1 CHEM 3000
2 1 PHY 4000
3 1 BIO 2000
4 2 CHEM 3000
5 2 GEO 1500
6 3 ENG 2000
table3: student_fees
id std_name coursecode feecode amount
1 1 CHEM BKFEE 100
2 1 CHEM SPFEE 140
3 1 CHEM MATFEE 250
4 1 PHY BKFEE 150
5 1 PHY SPFEE 200
6 1 BIO LBFEE 300
7 1 BIO MATFEE 350
9 1 BIO TECFEE 200
10 2 CHEM BKFEE 100
11 2 CHEM SPFEE 140
12 2 GEO BKFEE 150
13 3 ENG BKFEE 75
14 3 ENG SPFEE 140
15 3 ENG LBFEE 180
I am able to create a view like this, but it is not enough for my purposes, because from this view I can't calculate the sum of the total course fees (the course fee is repeated for each fee row), so simple grouping will not work. The data also needs to be filterable by student name, course code and fee code.
View:
id std_id coursecode course_fee feecode fee_amount
1 John CHEM 3000 BKFEE 100
2 John CHEM 3000 SPFEE 140
3 John CHEM 3000 MATFEE 250
4 John PHY 4000 BKFEE 150
5 John PHY 4000 SPFEE 200
6 John BIO 4000 LBFEE 300
7 John BIO 4000 MATFEE 350
8 John BIO 4000 TECFEE 200
9 Robert CHEM 3000 BKFEE 100
10 Robert CHEM 3000 SPFEE 140
11 Robert GEO 1500 BKFEE 150
12 Tinger ENG 2000 BKFEE 75
13 Tinger ENG 2000 SPFEE 140
14 Tinger ENG 2000 LBFEE 180
So, is there any way we can create a view like this?
View:
id std_id coursecode course_fee feecode fee_amount
1 John CHEM 3000 BKFEE 100
2 John CHEM 0 SPFEE 140
3 John CHEM 0 MATFEE 250
4 John PHY 4000 BKFEE 150
5 John PHY 0 SPFEE 200
6 John BIO 4000 LBFEE 300
7 John BIO 0 MATFEE 350
8 John BIO 0 TECFEE 200
9 Robert CHEM 3000 BKFEE 100
10 Robert CHEM 0 SPFEE 140
11 Robert GEO 1500 BKFEE 150
12 Tinger ENG 2000 BKFEE 75
13 Tinger ENG 0 SPFEE 140
14 Tinger ENG 0 LBFEE 180
Any help appreciated...
I guess you are looking for ROLLUP functionality in your view query. I am sharing two links: the first covers the basics of how ROLLUP works, and the second is specific to PostgreSQL:
first link, second link. Hope this will help you.
I have worked out a demo for you; please check the rollup query.
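For reference, a minimal sketch of what a ROLLUP query over these tables might look like (the joins and column names are taken from the tables above; the linked demo may differ in detail):
-- One row per (student, course) with the fees paid for that course,
-- plus a subtotal row per student and a grand-total row at the end.
SELECT s.name,
       sc.coursecode,
       SUM(sf.amount) AS fees_paid
FROM student s
JOIN student_course sc ON sc.std_id = s.id
JOIN student_fees sf ON sf.std_name = s.id
                    AND sf.coursecode = sc.coursecode
GROUP BY ROLLUP (s.name, sc.coursecode)
ORDER BY s.name, sc.coursecode;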
Not exactly the answer you are expecting, but you can also explore GROUPING SETS:
select name, sf.coursecode, amount, sum(fee)
from student s, student_course sc, student_fees sf
where s.id = sc.std_id
and sf.std_name = s.id
and sf.coursecode = sc.coursecode
group by
GROUPING SETS (
(name, sf.coursecode, amount, fee),
(name, sf.coursecode, fee),
()
)
order by name, sf.coursecode asc

Combine 2 data frames with different columns in spark

I have 2 dataframes:
df1 :
Id purchase_count purchase_sim
12 100 1500
13 1020 1300
14 1010 1100
20 1090 1400
21 1300 1600
df2:
Id click_count click_sim
12 1030 2500
13 1020 1300
24 1010 1100
30 1090 1400
31 1300 1600
I need to get the combined data frame with results as :
Id click_count click_sim purchase_count purchase_sim
12 1030 2500 100 1500
13 1020 1300 1020 1300
14 null null 1010 1100
24 1010 1100 null null
30 1090 1400 null null
31 1300 1600 null null
20 null null 1090 1400
21 null null 1300 1600
I can't use union because of the different column names. Can someone suggest a better way to do this?
All you require is a full outer join on the Id column.
df1.join(df2, Seq("Id"), "full_outer")
// Since the Id column name is the same in both dataframes, if you instead join with a
// comparison like df1($"Id") === df2($"Id"), you will get duplicate Id columns.
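For completeness, a minimal self-contained sketch of the same join (this assumes a SparkSession available as spark, e.g. in spark-shell, and abbreviates the sample rows from the question):
import spark.implicits._

val df1 = Seq((12, 100, 1500), (13, 1020, 1300), (14, 1010, 1100))
  .toDF("Id", "purchase_count", "purchase_sim")
val df2 = Seq((12, 1030, 2500), (13, 1020, 1300), (24, 1010, 1100))
  .toDF("Id", "click_count", "click_sim")

// Joining on the column name keeps a single Id column; rows present in only
// one dataframe get nulls in the other side's columns.
val combined = df1.join(df2, Seq("Id"), "full_outer")
combined.show()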
Please refer to the documentation below for future reference.
https://docs.databricks.com/spark/latest/faq/join-two-dataframes-duplicated-column.html

Average of grouping columns

My table is something like this
id ...... amount...........food
+++++++++++++++++++++++++++++++++++++
1 ........ 5 ............. banana
1 ........ 4 ............. strawberry
2 ........ 2 ............. banana
2 ........ 7 ............. orange
2 ........ 8 ............. strawberry
3 ........ 10 .............lime
3 ........ 12 .............banana
What I want is a table displaying each food with the average of its amount values across the IDs it appears under.
The table should look something like this I think:
food ........... avg............
++++++++++++++++++++++++++++++++
banana .......... 6.3 ............
strawberry ...... 6 ............
orange .......... 7 ............
lime ............ 10 ............
I'm not really sure how to do this. If I just use avg(amount), it will aggregate the whole amount column.
Did you try GROUP BY?
SELECT food, AVG(amount) "avg"
FROM table1
GROUP BY food
Here is SQLFiddle
Output:
| food | avg |
|------------|-------------------|
| lime | 10 |
| orange | 7 |
| strawberry | 6 |
| banana | 6.333333333333333 |