How to fix Ceph warning "storage filling up"

I have a Ceph cluster, and the monitoring tab of the dashboard shows me the warning "storage filling up":
alertname: storage filling up
description: Mountpoint /rootfs/run on ceph2-node-03.fns will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.
But all OSD devices are nearly empty:
[root@ceph2-node-01 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.01900 1.00000 20 GiB 61 MiB 15 MiB 0 B 44 MiB 20 GiB 0.30 0.92 0 up
3 ssd 0.01900 1.00000 20 GiB 69 MiB 15 MiB 5 KiB 53 MiB 20 GiB 0.33 1.04 1 up
1 hdd 0.01900 1.00000 20 GiB 76 MiB 16 MiB 6 KiB 60 MiB 20 GiB 0.37 1.15 0 up
4 ssd 0.01900 1.00000 20 GiB 68 MiB 15 MiB 3 KiB 52 MiB 20 GiB 0.33 1.03 1 up
2 hdd 0.01900 1.00000 20 GiB 66 MiB 16 MiB 6 KiB 50 MiB 20 GiB 0.32 1.00 0 up
5 ssd 0.01900 1.00000 20 GiB 57 MiB 15 MiB 5 KiB 41 MiB 20 GiB 0.28 0.86 1 up
TOTAL 120 GiB 396 MiB 92 MiB 28 KiB 300 MiB 120 GiB 0.32
MIN/MAX VAR: 0.86/1.15 STDDEV: 0.03
What should I do to fix this warning? Is this a bug, or something else?
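A first sketch of where to look, assuming the alert comes from the node's filesystem metrics rather than from the OSDs (/rootfs/run is how a containerized node-exporter sees the host's /run, so ceph osd df will never reflect it):

# On the node named in the alert (ceph2-node-03), check the
# filesystem the prediction is actually about:
df -h /run
# Find what is growing under /run:
du -xsh /run/* 2>/dev/null | sort -h | tail
# Cross-check Ceph's own view of the warning:
ceph health detail

If /run really is filling (often a service writing logs or runtime state to tmpfs), fix the writer; if usage is flat, the 48-hour linear extrapolation in the alert rule is what needs tuning.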

Related

Merge rows (same values) in PostgreSQL based on row difference

I have a table my_tbl in my PostgreSQL 9.5 (x64 Windows) database, which contains data as shown below.
grp id low high avg
1 7 292 322 18.8
1 8 322 352 18.8
1 9 352 22 18.8
1 10 22 52 18.8
1 11 52 82 18.8
1 12 82 112 18.8
4 1 97 127 19.0
4 2 127 157 11.4
4 3 157 187 11.4
4 4 187 217 19.6
4 5 217 247 19.6
4 6 247 277 19.6
4 10 7 37 19.5
4 11 37 67 19.5
4 12 67 97 19.5
6 6 182 212 0.0
6 7 212 242 0.0
6 8 242 272 0.0
6 9 272 302 21.4
6 10 302 332 21.4
6 11 332 2 0.0
6 12 2 32 0.0
7 5 275 305 0.0
7 6 305 335 0.0
7 7 335 5 0.0
7 8 5 35 0.0
7 9 35 65 21.2
7 10 65 95 21.2
7 11 95 125 21.2
7 12 125 155 21.2
Now I would like to merge rows in the above data as follows. Within each grp, consecutive rows whose avg values are equal (i.e. the difference between one row's avg and the next is zero) should be merged into a single row that takes the low value from the first row of the run and the high value from the last row of the run (where the run of equal avg values ends).
My expected output is:
grp id low high avg
1 {7,8,9,10,11,12} 292 112 18.8
4 {1} 97 127 19.0
4 {2,3} 127 187 11.4
4 {4,5,6} 187 277 19.6
4 {10,11,12} 7 97 19.5
6 {6,7,8} 182 272 0.0
6 {9,10} 272 332 21.4
6 {11,12} 332 32 0.0
7 {5,6,7,8} 275 35 0.0
7 {9,10,11,12} 35 155 21.2
Could someone help or suggest how this could be achieved using SQL/PL/pgSQL?
You can use the ARRAY_AGG and FIRST_VALUE functions, combined with a row-number difference that keeps non-adjacent runs of the same avg apart (grp 6 contains two separate runs with avg 0.0, so partitioning by grp, avg alone would merge them incorrectly):
SELECT
  grp, ARRAY_AGG(id ORDER BY id) AS id, low, high, avg
FROM (
  SELECT
    grp,
    id,
    FIRST_VALUE(low)  OVER (PARTITION BY grp, avg, isl ORDER BY id) AS low,
    FIRST_VALUE(high) OVER (PARTITION BY grp, avg, isl ORDER BY id DESC) AS high,
    avg,
    isl
  FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY grp ORDER BY id)
         - ROW_NUMBER() OVER (PARTITION BY grp, avg ORDER BY id) AS isl
    FROM my_tbl
  ) s
) t
GROUP BY grp, avg, isl, low, high
ORDER BY grp, MIN(id);

Unrecognized Quartz MS font

I tried to generate the training image with the Quartz MS font as follows.
Then I used jTessBoxEditorFX to generate the box file, as follows:
0 26 23 97 125 0
1 169 26 189 122 0
2 209 23 279 124 0
3 305 23 370 124 0
4 391 25 461 121 0
5 481 23 551 124 0
6 571 23 641 124 0
7 665 27 731 124 0
8 753 24 822 124 0
9 842 24 912 125 0
The resulting traineddata does not recognize text correctly. Is there a problem with my approach?
If any seniors have had a similar experience, I would appreciate your guidance. Thank you.
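For reference, a sketch of the legacy Tesseract 3.x training pipeline that consumes such a tif/box pair (file names are placeholders; a frequent cause of unusable traineddata is a tif/box name mismatch or a missing font_properties entry):

# "qtz" is a placeholder language code, "quartzms" a placeholder font name
tesseract qtz.quartzms.exp0.tif qtz.quartzms.exp0 nobatch box.train
unicharset_extractor qtz.quartzms.exp0.box
echo "quartzms 0 0 0 0 0" > font_properties
mftraining -F font_properties -U unicharset -O qtz.unicharset qtz.quartzms.exp0.tr
cntraining qtz.quartzms.exp0.tr
# prefix the outputs with the language code and bundle them
mv inttemp qtz.inttemp; mv normproto qtz.normproto
mv pffmtable qtz.pffmtable; mv shapetable qtz.shapetable
combine_tessdata qtz.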

Creation of a loop loading values from .txt files

I have a problem creating a loop that loads each value from ".txt" files and uses it in some calculations.
All the values are in the 2nd column, and the first one is always on the 9th line of each file.
Each ".txt" file contains a different number of values in its 2nd column (they all have the same text after the final value), so I want a loop that can read those values and stop whenever it reaches that text.
Here is an example of these files (the values that interest me are the ones under the G headline: 33, 55, 93, ..., 18):
Latitude: 34°40'30" North,
Longitude: 3°16'6" East
Results for: April
Inclination of plane: 32 deg.
Orientation (azimuth) of plane: 0 deg.
Time G Gd Gc DNI DNIc A Ad Ac
05:52 33 33 25 0 0 233 64 311
06:07 55 44 47 246 361 356 105 473
06:22 93 59 92 312 459 444 124 590
06:37 136 73 147 366 538 514 138 684
06:52 183 86 207 410 602 572 150 760
07:07 232 98 271 447 656 620 160 823
07:22 283 110 337 478 701 659 168 874
16:37 283 110 337 478 701 659 168 874
16:52 232 98 271 447 656 620 160 823
17:07 183 86 207 410 602 572 150 760
17:22 136 73 147 366 538 514 138 684
17:37 93 59 92 312 459 444 124 590
17:52 55 44 47 246 361 356 105 473
18:07 33 33 25 0 0 233 64 311
18:22 18 18 14 0 0 9 8 7
G: Global irradiance on a fixed plane (W/m2)
Gd: Diffuse irradiance on a fixed plane (W/m2)
Gc: Global clear-sky irradiance on a fixed plane (W/m2)
DNI: Direct normal irradiance (W/m2)
DNIc: Clear-sky direct normal irradiance (W/m2)
A: Global irradiance on 2-axis tracking plane (W/m2)
Ad: Diffuse irradiance on 2-axis tracking plane (W/m2)
Ac: Global clear-sky irradiance on 2-axis tracking plane (W/m2)
PVGIS (c) European Communities, 2001-2012
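The target language isn't stated, so here is a minimal shell sketch of the stop-at-footer idea (it assumes, as in the sample above, that every data row starts with an HH:MM time stamp, so matching on that pattern ends naturally when the footer text begins):

# Print the G column (2nd field) of every data row in each .txt file;
# rows are recognized by their leading HH:MM time stamp, so the
# footer lines (G:, Gd:, ...) are skipped automatically.
for f in *.txt; do
    echo "== $f =="
    awk '$1 ~ /^[0-9][0-9]:[0-9][0-9]$/ { print $2 }' "$f"
done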

Azure Virtual Machine Disk IOPS Performance vs AWS

I have a MongoDB replica set with approx. 200GB of data.
This currently exists in AWS on two m3.medium instances (1 core, 3.7GB). I have a requirement to move this to Azure A2 instances (2 cores, 3.5GB); however, I am concerned about the performance.
In AWS I have a single disk per machine, a 220GB SSD through EBS, which delivers 660 IOPS (or whatever that means in AWS speak).
According to Azure, I should get 500 IOPS per disk, so I thought performance would be comparable, however here are the results of mongoperf on Azure:
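For reference, mongoperf reads its configuration as JSON on stdin, so the runs below can be reproduced with something like:

# read-only test with 2 threads against a 1000MB test file
echo "{ nThreads: 2, fileSizeMB: 1000, r: true }" | mongoperf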
Azure mongoperf Output:
{ nThreads: 2, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
optoins:{ nThreads: 2, fileSizeMB: 1000, r: true }
wthr 2
new thread, total running : 1
read:1 write:0
64 ops/sec 0 MB/sec
82 ops/sec 0 MB/sec
85 ops/sec 0 MB/sec
111 ops/sec 0 MB/sec
95 ops/sec 0 MB/sec
106 ops/sec 0 MB/sec
96 ops/sec 0 MB/sec
112 ops/sec 0 MB/sec
new thread, total running : 2
read:1 write:0
188 ops/sec 0 MB/sec
195 ops/sec 0 MB/sec
223 ops/sec 0 MB/sec
137 ops/sec 0 MB/sec
222 ops/sec 0 MB/sec
212 ops/sec 0 MB/sec
200 ops/sec 0 MB/sec
Whilst my AWS m3.medium instances perform totally differently:
AWS mongoperf Output:
{ nThreads: 2, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
optoins:{ nThreads: 2, fileSizeMB: 1000, r: true }
wthr 2
new thread, total running : 1
read:1 write:0
3149 ops/sec 12 MB/sec
3169 ops/sec 12 MB/sec
3071 ops/sec 11 MB/sec
3044 ops/sec 11 MB/sec
2688 ops/sec 10 MB/sec
2880 ops/sec 11 MB/sec
3039 ops/sec 11 MB/sec
3020 ops/sec 11 MB/sec
new thread, total running : 2
read:1 write:0
3133 ops/sec 12 MB/sec
3044 ops/sec 11 MB/sec
3052 ops/sec 11 MB/sec
3016 ops/sec 11 MB/sec
2928 ops/sec 11 MB/sec
3041 ops/sec 11 MB/sec
3061 ops/sec 11 MB/sec
3025 ops/sec 11 MB/sec
How can I achieve the same performance through Azure as I do through AWS? I have looked at the D* instances, which provide local SSD storage of 500GB, but these disks are ephemeral and so no good for hosting my database.
Edit: I can see that I can attach additional Premium Storage drives to the D* instances; however, the costs for these are massive compared to AWS. It looks like for high-performance IO you still cannot beat AWS cost-wise.
The approach I have taken is to attach the maximum number of drives the server can support; for an A2 Standard this is 4 drives.
I have 4 x 200GB drives, placed in a RAID0 array giving ~800GB of storage.
The RAID0 lets me combine the 500 IOPS per disk into a theoretical max of 2000 IOPS, set up roughly as sketched below.
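A sketch of the array assembly (device names are examples; the iostat output further down shows the members as sdc-sdf and the array as md127):

# stripe the four data disks into a single RAID0 device
mdadm --create /dev/md127 --level=0 --raid-devices=4 \
      /dev/sdc /dev/sdd /dev/sde /dev/sdf
mkfs.ext4 /dev/md127
mount /dev/md127 /var/lib/mongodb   # mount point is an example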
This now results in the following mongoperf speeds on the A2 machine. For some reason the single-threaded performance is very low, including the test-file write, which happens at only 150 IOPS. At 10 threads the speed exceeds the AWS instances, although I'm unsure whether some kind of read-ahead caching in Azure is helping here that would not apply in a real DB scenario. Performance on AWS does not change with increased thread count.
Azure Performance:
{ nThreads: 10, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
optoins:{ nThreads: 10, fileSizeMB: 1000, r: true }
wthr 10
new thread, total running : 1
read:1 write:0
125 ops/sec 0 MB/sec
194 ops/sec 0 MB/sec
174 ops/sec 0 MB/sec
213 ops/sec 0 MB/sec
138 ops/sec 0 MB/sec
117 ops/sec 0 MB/sec
174 ops/sec 0 MB/sec
92 ops/sec 0 MB/sec
new thread, total running : 2
read:1 write:0
354 ops/sec 1 MB/sec
359 ops/sec 1 MB/sec
322 ops/sec 1 MB/sec
408 ops/sec 1 MB/sec
440 ops/sec 1 MB/sec
265 ops/sec 1 MB/sec
472 ops/sec 1 MB/sec
484 ops/sec 1 MB/sec
new thread, total running : 4
read:1 write:0
read:1 write:0
984 ops/sec 3 MB/sec
915 ops/sec 3 MB/sec
1419 ops/sec 5 MB/sec
1669 ops/sec 6 MB/sec
1934 ops/sec 7 MB/sec
1660 ops/sec 6 MB/sec
1348 ops/sec 5 MB/sec
1735 ops/sec 6 MB/sec
new thread, total running : 8
read:1 write:0
read:1 write:0
read:1 write:0
read:1 write:0
4041 ops/sec 15 MB/sec
5370 ops/sec 20 MB/sec
5643 ops/sec 22 MB/sec
5639 ops/sec 22 MB/sec
4388 ops/sec 17 MB/sec
6093 ops/sec 23 MB/sec
6350 ops/sec 24 MB/sec
6961 ops/sec 27 MB/sec
new thread, total running : 10
read:1 write:0
read:1 write:0
9684 ops/sec 37 MB/sec
11528 ops/sec 45 MB/sec
13807 ops/sec 53 MB/sec
16666 ops/sec 65 MB/sec
16306 ops/sec 63 MB/sec
24292 ops/sec 94 MB/sec
24264 ops/sec 94 MB/sec
19358 ops/sec 75 MB/sec
28067 ops/sec 109 MB/sec
43151 ops/sec 168 MB/sec
45165 ops/sec 176 MB/sec
44847 ops/sec 175 MB/sec
43806 ops/sec 171 MB/sec
43103 ops/sec 168 MB/sec
43477 ops/sec 169 MB/sec
44651 ops/sec 174 MB/sec
45365 ops/sec 177 MB/sec
41495 ops/sec 162 MB/sec
45281 ops/sec 176 MB/sec
47014 ops/sec 183 MB/sec
46056 ops/sec 179 MB/sec
45418 ops/sec 177 MB/sec
42363 ops/sec 165 MB/sec
43974 ops/sec 171 MB/sec
At the end, with the high read IO, iostat gives me very odd numbers:
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
sdd 10885.07 43540.30 0.00 87516 0
sde 10958.21 43832.84 0.00 88104 0
sdf 10960.70 43842.79 0.00 88124 0
sdc 10920.40 43681.59 0.00 87800 0
md127 43722.89 174891.54 0.00 351532 0
However: when I run mongoperf with reads AND writes, performance falls off a cliff, while AWS speeds remain identical.
Azure with read and write mongoperf
new thread, total running : 10
read:1 write:1
read:1 write:1
126 ops/sec 0 MB/sec
84 ops/sec 0 MB/sec
150 ops/sec 0 MB/sec
123 ops/sec 0 MB/sec
84 ops/sec 0 MB/sec
190 ops/sec 0 MB/sec
179 ops/sec 0 MB/sec
108 ops/sec 0 MB/sec
171 ops/sec 0 MB/sec
192 ops/sec 0 MB/sec
152 ops/sec 0 MB/sec
103 ops/sec 0 MB/sec
163 ops/sec 0 MB/sec
116 ops/sec 0 MB/sec
121 ops/sec 0 MB/sec
76 ops/sec 0 MB/sec

Will there be any negative impact from using the option usePowerOf2Sizes?

I am using MongoDB for our application.
I ran db.setProfilingLevel(1,25) in the mongo shell to identify slow queries from the system.profile collection.
I observed that the read operations are fast, but the update operations are very slow.
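For example, the slow updates can be pulled out of system.profile like this (a sketch; 25 ms is the slow-operation threshold set by the setProfilingLevel call above):

# show the five slowest profiled updates, slowest first
# ("mydb" is a placeholder database name)
mongo mydb --eval 'db.system.profile.find({ op: "update", millis: { $gt: 25 } }).sort({ millis: -1 }).limit(5).forEach(printjson)'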
This is a sample of my mongostat
insert/s query/s update/s delete/s getmore/s command/s flushes/s mapped vsize res faults/s locked % idx miss % q t|r|w conn time
0 950 469 0 0 471 0 10396 12295 3207 27 34.9 0 0|0|0 152 07:18:49
0 838 418 0 0 422 0 10396 12295 3209 21 34.6 0 0|0|0 152 07:18:50
0 1005 502 0 0 504 0 10396 12295 3211 21 35.5 0 0|0|0 152 07:18:51
0 837 410 0 0 418 0 10396 12295 3212 20 35.7 0 0|0|0 152 07:18:52
0 754 377 0 0 379 0 10396 12295 3214 19 36.7 0 0|0|0 152 07:18:53
0 841 420 0 0 422 0 10396 12295 3216 24 35.9 0 0|0|0 152 07:18:54
0 877 438 0 0 442 0 10396 12295 3217 23 37.2 0 0|0|0 152 07:18:55
0 799 393 0 0 395 0 10396 12295 3219 21 37 0 0|0|0 152 07:18:56
0 947 471 0 0 479 0 10396 12295 3221 26 39 0 0|0|0 152 07:18:57
0 855 427 0 0 429 0 10396 12295 3222 24 38.4 0 0|0|0 152 07:18:58
0 1007 504 0 0 506 0 10396 12295 3224 31 36 0 0|0|0 152 07:18:59
0 841 413 0 0 417 0 10396 12295 3226 23 37.2 0 0|0|0 152 07:19:00
The stats are from the dev environment; I can't really speak for the prod environment.
As per the architecture I cannot reduce the index size on that collection, but I saw that usePowerOf2Sizes might help in this case to improve the write/update response time in MongoDB.
I have heard a lot about usePowerOf2Sizes, namely that:
usePowerOf2Sizes can reduce fragmentation.
All data will be kept in memory and performance will be great.
With this option MongoDB will be able to reuse space more effectively.
usePowerOf2Sizes is useful for collections where you will be inserting and deleting large numbers of documents, to ensure that MongoDB uses space on disk effectively.
I want to know: will there be any negative impact from using the usePowerOf2Sizes option? I have 17 collections in my MongoDB and want to use usePowerOf2Sizes for only one of them.
Please let me know; thanks in advance.
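For reference, a sketch of how the flag is enabled for a single collection (database and collection names are placeholders; collMod with usePowerOf2Sizes applies to the MMAPv1-era MongoDB 2.x releases):

# enable powers-of-2 record allocation for one collection only
# ("mydb" and "mycoll" are placeholder names)
mongo mydb --eval 'printjson(db.runCommand({ collMod: "mycoll", usePowerOf2Sizes: true }))'
# verify: userFlags of 1 means usePowerOf2Sizes is on
mongo mydb --eval 'printjson(db.mycoll.stats().userFlags)'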