Azure Virtual Machine Disk IOPS Performance vs AWS - MongoDB

I have a MongoDB replica set with approx. 200GB of data.
This currently exists in AWS on two m3.medium instances (1 core, 3.75GB RAM). I have a requirement to move this to Azure A2 instances (2 cores, 3.5GB RAM); however, I am concerned about the performance.
In AWS I have a single disk per machine, a 220GB SSD through EBS, which delivers 660 IOPS (or whatever this means in AWS speak).
According to Azure, I should get 500 IOPS per disk, so I thought performance would be comparable; however, here are the results of mongoperf on Azure:
Azure mongoperf Output:
{ nThreads: 2, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
options:{ nThreads: 2, fileSizeMB: 1000, r: true }
wthr 2
new thread, total running : 1
read:1 write:0
64 ops/sec 0 MB/sec
82 ops/sec 0 MB/sec
85 ops/sec 0 MB/sec
111 ops/sec 0 MB/sec
95 ops/sec 0 MB/sec
106 ops/sec 0 MB/sec
96 ops/sec 0 MB/sec
112 ops/sec 0 MB/sec
new thread, total running : 2
read:1 write:0
188 ops/sec 0 MB/sec
195 ops/sec 0 MB/sec
223 ops/sec 0 MB/sec
137 ops/sec 0 MB/sec
222 ops/sec 0 MB/sec
212 ops/sec 0 MB/sec
200 ops/sec 0 MB/sec
Whilst my AWS m3.medium instances perform totally differently:
AWS mongoperf Output:
{ nThreads: 2, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
options:{ nThreads: 2, fileSizeMB: 1000, r: true }
wthr 2
new thread, total running : 1
read:1 write:0
3149 ops/sec 12 MB/sec
3169 ops/sec 12 MB/sec
3071 ops/sec 11 MB/sec
3044 ops/sec 11 MB/sec
2688 ops/sec 10 MB/sec
2880 ops/sec 11 MB/sec
3039 ops/sec 11 MB/sec
3020 ops/sec 11 MB/sec
new thread, total running : 2
read:1 write:0
3133 ops/sec 12 MB/sec
3044 ops/sec 11 MB/sec
3052 ops/sec 11 MB/sec
3016 ops/sec 11 MB/sec
2928 ops/sec 11 MB/sec
3041 ops/sec 11 MB/sec
3061 ops/sec 11 MB/sec
3025 ops/sec 11 MB/sec
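For reference, both transcripts above come from piping the JSON options into mongoperf on each machine (a sketch; mongoperf ships with the MongoDB server binaries, and the options shown here are the ones echoed in the output headers):

```shell
# Read-only random I/O benchmark against a 1000MB test file;
# mongoperf reads its JSON options from stdin
echo "{ nThreads: 2, fileSizeMB: 1000, r: true }" | mongoperf
```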
How can I achieve the same performance through Azure as I do through AWS? I have looked at the D* instances, which provide 500GB of local SSD storage; however, these disks are ephemeral and so no good for hosting my database.
Edit: I can see that I can attach additional Premium Storage drives to the D* instances, however the costs for these are massive compared to AWS. It looks like, for high-performance IO, you still cannot beat AWS on cost.

The approach I have taken towards this is to attach the maximum number of drives the server can support; for an A2 Standard this is 4 drives.
I have 4 x 200GB drives, placed in a RAID0 array giving ~800GB of storage.
The RAID0 allows me to combine the 500 IOPS per-disk limit into a theoretical maximum of 2000 IOPS.
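The array setup can be sketched roughly as follows (a sketch only; the device names and mount point are assumptions rather than taken from my actual setup — check yours with lsblk first):

```shell
# Combine the 4 attached data disks into a single striped (RAID0) array
# (device names /dev/sdc..sdf are assumptions; verify with lsblk)
mdadm --create /dev/md127 --level=0 --raid-devices=4 \
    /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Create a filesystem on the array and mount it for the MongoDB data directory
mkfs.ext4 /dev/md127
mount /dev/md127 /var/lib/mongodb
```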
This now results in the following mongoperf speeds on the A2 machine. For some reason the single-threaded performance is very low, including the test-file write, which happens at only 150 IOPS. At 10 threads the speed exceeds the AWS instances; however, I'm unsure whether there is some kind of read-ahead caching going on here in Azure that would not apply in a real DB scenario. Performance on AWS does not alter with increased thread count.
Azure Performance:
{ nThreads: 10, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
options:{ nThreads: 10, fileSizeMB: 1000, r: true }
wthr 10
new thread, total running : 1
read:1 write:0
125 ops/sec 0 MB/sec
194 ops/sec 0 MB/sec
174 ops/sec 0 MB/sec
213 ops/sec 0 MB/sec
138 ops/sec 0 MB/sec
117 ops/sec 0 MB/sec
174 ops/sec 0 MB/sec
92 ops/sec 0 MB/sec
new thread, total running : 2
read:1 write:0
354 ops/sec 1 MB/sec
359 ops/sec 1 MB/sec
322 ops/sec 1 MB/sec
408 ops/sec 1 MB/sec
440 ops/sec 1 MB/sec
265 ops/sec 1 MB/sec
472 ops/sec 1 MB/sec
484 ops/sec 1 MB/sec
new thread, total running : 4
read:1 write:0
read:1 write:0
984 ops/sec 3 MB/sec
915 ops/sec 3 MB/sec
1419 ops/sec 5 MB/sec
1669 ops/sec 6 MB/sec
1934 ops/sec 7 MB/sec
1660 ops/sec 6 MB/sec
1348 ops/sec 5 MB/sec
1735 ops/sec 6 MB/sec
new thread, total running : 8
read:1 write:0
read:1 write:0
read:1 write:0
read:1 write:0
4041 ops/sec 15 MB/sec
5370 ops/sec 20 MB/sec
5643 ops/sec 22 MB/sec
5639 ops/sec 22 MB/sec
4388 ops/sec 17 MB/sec
6093 ops/sec 23 MB/sec
6350 ops/sec 24 MB/sec
6961 ops/sec 27 MB/sec
new thread, total running : 10
read:1 write:0
read:1 write:0
9684 ops/sec 37 MB/sec
11528 ops/sec 45 MB/sec
13807 ops/sec 53 MB/sec
16666 ops/sec 65 MB/sec
16306 ops/sec 63 MB/sec
24292 ops/sec 94 MB/sec
24264 ops/sec 94 MB/sec
19358 ops/sec 75 MB/sec
28067 ops/sec 109 MB/sec
43151 ops/sec 168 MB/sec
45165 ops/sec 176 MB/sec
44847 ops/sec 175 MB/sec
43806 ops/sec 171 MB/sec
43103 ops/sec 168 MB/sec
43477 ops/sec 169 MB/sec
44651 ops/sec 174 MB/sec
45365 ops/sec 177 MB/sec
41495 ops/sec 162 MB/sec
45281 ops/sec 176 MB/sec
47014 ops/sec 183 MB/sec
46056 ops/sec 179 MB/sec
45418 ops/sec 177 MB/sec
42363 ops/sec 165 MB/sec
43974 ops/sec 171 MB/sec
At the end, during the high read IO, this gives me very odd numbers from iostat:
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
sdd 10885.07 43540.30 0.00 87516 0
sde 10958.21 43832.84 0.00 88104 0
sdf 10960.70 43842.79 0.00 88124 0
sdc 10920.40 43681.59 0.00 87800 0
md127 43722.89 174891.54 0.00 351532 0
However: when I do a mongoperf with reads AND writes, performance falls off a cliff, while the AWS speeds remain identical.
Azure with read and write mongoperf
new thread, total running : 10
read:1 write:1
read:1 write:1
126 ops/sec 0 MB/sec
84 ops/sec 0 MB/sec
150 ops/sec 0 MB/sec
123 ops/sec 0 MB/sec
84 ops/sec 0 MB/sec
190 ops/sec 0 MB/sec
179 ops/sec 0 MB/sec
108 ops/sec 0 MB/sec
171 ops/sec 0 MB/sec
192 ops/sec 0 MB/sec
152 ops/sec 0 MB/sec
103 ops/sec 0 MB/sec
163 ops/sec 0 MB/sec
116 ops/sec 0 MB/sec
121 ops/sec 0 MB/sec
76 ops/sec 0 MB/sec
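The read/write run above corresponds to enabling both flags in the mongoperf options (a sketch, mirroring the read-only invocation used earlier):

```shell
# Mixed random read/write benchmark; w: true adds write load alongside reads
echo "{ nThreads: 10, fileSizeMB: 1000, r: true, w: true }" | mongoperf
```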

Related

How to fix Ceph warning "storage filling up"

I have a Ceph cluster, and the monitoring tab of the dashboard shows me the warning "storage filling up":
alertname
storage filling up
description
Mountpoint /rootfs/run on ceph2-node-03.fns will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.
but all the devices are free:
[root@ceph2-node-01 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.01900 1.00000 20 GiB 61 MiB 15 MiB 0 B 44 MiB 20 GiB 0.30 0.92 0 up
3 ssd 0.01900 1.00000 20 GiB 69 MiB 15 MiB 5 KiB 53 MiB 20 GiB 0.33 1.04 1 up
1 hdd 0.01900 1.00000 20 GiB 76 MiB 16 MiB 6 KiB 60 MiB 20 GiB 0.37 1.15 0 up
4 ssd 0.01900 1.00000 20 GiB 68 MiB 15 MiB 3 KiB 52 MiB 20 GiB 0.33 1.03 1 up
2 hdd 0.01900 1.00000 20 GiB 66 MiB 16 MiB 6 KiB 50 MiB 20 GiB 0.32 1.00 0 up
5 ssd 0.01900 1.00000 20 GiB 57 MiB 15 MiB 5 KiB 41 MiB 20 GiB 0.28 0.86 1 up
TOTAL 120 GiB 396 MiB 92 MiB 28 KiB 300 MiB 120 GiB 0.32
MIN/MAX VAR: 0.86/1.15 STDDEV: 0.03
What should I do to fix this warning?
Is this a bug, or something else?
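Note that the alert description refers to the mountpoint /rootfs/run on the node (i.e. /run on the host), not to the OSD devices, so `ceph osd df` is not measuring the same thing. A quick check might look like this (a sketch; the hostname is the one named in the alert):

```shell
# Check usage of the mountpoint the alert is actually about, on the affected node
ssh ceph2-node-03.fns df -h /run

# Review the full health detail from any monitor node
ceph health detail
```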

Unrecognized Quartz MS font

I tried to generate a training image with the Quartz MS font, as follows.
I then used jTessBoxEditorFX to generate the box file, as follows:
0 26 23 97 125 0
1 169 26 189 122 0
2 209 23 279 124 0
3 305 23 370 124 0
4 391 25 461 121 0
5 481 23 551 124 0
6 571 23 641 124 0
7 665 27 731 124 0
8 753 24 822 124 0
9 842 24 912 125 0
The resulting traineddata cannot recognize the font normally. Is there any problem with my approach?
If anyone has had a similar experience and can guide me, thank you.

Matlab find zero value with certain range

I have this matrix:
A =[22 22 142 142 142 92 92 92 0 0
0 109 109 151 151 151 23 23 149 149
0 0 0 152 152 152 38 38 0 0
0 13 13 113 113 113 119 119 119 0
0 8 8 8 84 84 14 14 14 0
0 0 144 144 144 0 0 0 66 66
139 139 139 34 34 34 0 0 0 0
0 0 64 64 64 128 128 59 59 59
83 83 83 65 65 65 67 67 67 0];
How can I find the indices (row, column) of the zero values, in particular where 2 or more zeros occur in a row?
You can use find as follows:
[r,c] = find(A==0)
[rows,cols] = ind2sub(size(A),find(A==0))
With two output arguments, find gives you the row and column indices directly; alternatively, ind2sub converts the linear indices returned by find into (row, column) subscripts.

Will there be any negative impact in using the option usePowerOf2Sizes?

I am using MongoDB for our application.
I ran db.setProfilingLevel(1,25) in the mongo shell to identify the slow queries from the system.profile collection.
I observed that the read operations are fast, but the update operations are very slow.
This is a sample of my mongostat
insert/s query/s update/s delete/s getmore/s command/s flushes/s mapped vsize res faults/s locked % idx miss % q t|r|w conn time
0 950 469 0 0 471 0 10396 12295 3207 27 34.9 0 0|0|0 152 07:18:49
0 838 418 0 0 422 0 10396 12295 3209 21 34.6 0 0|0|0 152 07:18:50
0 1005 502 0 0 504 0 10396 12295 3211 21 35.5 0 0|0|0 152 07:18:51
0 837 410 0 0 418 0 10396 12295 3212 20 35.7 0 0|0|0 152 07:18:52
0 754 377 0 0 379 0 10396 12295 3214 19 36.7 0 0|0|0 152 07:18:53
0 841 420 0 0 422 0 10396 12295 3216 24 35.9 0 0|0|0 152 07:18:54
0 877 438 0 0 442 0 10396 12295 3217 23 37.2 0 0|0|0 152 07:18:55
0 799 393 0 0 395 0 10396 12295 3219 21 37 0 0|0|0 152 07:18:56
0 947 471 0 0 479 0 10396 12295 3221 26 39 0 0|0|0 152 07:18:57
0 855 427 0 0 429 0 10396 12295 3222 24 38.4 0 0|0|0 152 07:18:58
0 1007 504 0 0 506 0 10396 12295 3224 31 36 0 0|0|0 152 07:18:59
0 841 413 0 0 417 0 10396 12295 3226 23 37.2 0 0|0|0 152 07:19:00
The stats are from the dev environment; I can't really draw conclusions about the prod environment from them.
As per the architecture, I cannot reduce the index size on that collection, but I saw that usePowerOf2Sizes can help me in this case in improving the write/update response times in MongoDB.
I have heard a lot about usePowerOf2Sizes, which is said to:
reduce fragmentation, so that all data will fit in memory and performance will be great;
allow MongoDB to reuse space more effectively;
be useful for collections where you will be inserting and deleting large numbers of documents, to ensure that MongoDB uses space on disk effectively.
I want to know if there will be any negative impact in using the option usePowerOf2Sizes. I have 17 collections in my MongoDB and want to use usePowerOf2Sizes for only one collection.
Please let me know; thanks in advance.
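For reference, usePowerOf2Sizes can be toggled per collection with the collMod command, so it need not affect the other 16 collections (a sketch; the database and collection names here are placeholders, not from the question):

```shell
# Enable power-of-2 record allocation on a single collection only
mongo mydb --eval 'printjson(db.runCommand({ collMod: "mycollection", usePowerOf2Sizes: true }))'
```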

subtracting two matrices in matlab, the negative values in result are substituted by zero

I have two matrices in Matlab:
IRwindow =
   183   171   150   125   137
   138   167   184   173   152
   105   114   141   167   185
   148   113   105   115   141
   186   183   147   112   105
ILwindow =
   201   170   165   177   203
   181   174   167   169   189
   154   150   156   168   181
   187   175   158   131   144
   173   186   183   167   141
I want to subtract these two matrices element-wise; for example, the first element should be 183 - 201 = -18, BUT the output for this element is zero. The result comes out as below:
IRwindow - ILwindow
ans =
    0    1    0    0    0
    0    0   17    4    0
    0    0    0    0    4
    0    0    0    0    0
   13    0    0    0    0
How can I keep the real results, without the negatives in my result matrix being replaced by zero?
Run the following example code:
%# Create random matrices
X = randi(100, 5, 5);
Y = randi(100, 5, 5);
%# Convert to strictly non-negative format
X = uint8(X);
Y = uint8(Y);
%# Perform subtractions
A = X - Y;
%# Convert to double format
X = double(X);
Y = double(Y);
%# Perform subtraction
B = X - Y;
For a given sample run:
A =
0 15 36 0 0
0 0 0 0 3
0 0 0 25 0
13 0 15 0 0
0 49 0 0 14
while:
B =
-8 15 36 -4 -65
0 -47 -45 -11 3
-18 -17 -11 25 -52
13 -53 15 -15 -1
-35 49 -47 -8 14
You will notice that all the negative numbers in A have been replaced by 0, while the negative numbers in B are displayed correctly.
Stated simply: if you use a numeric format that is not able to store negative numbers, then Matlab saturates the result at 0. The solution is to convert to a format that is able to accommodate "real" numbers (or a close approximation thereof), such as double, or perhaps in your case one of the signed integer formats may be more appropriate, such as int8, int16, int32 or int64.
Another option is to convert to single or double before the subtraction, in one line as follows (converting after the subtraction would be too late, since the saturation to zero has already happened):
ans = double(IRwindow) - double(ILwindow)
I don't get the same problem as you. I have this code:
IRwindow = [
183 171 150 125 137
138 167 184 173 152
105 114 141 167 185
148 113 105 115 141
186 183 147 112 105]
ILwindow = [
201 170 165 177 203
181 174 167 169 189
154 150 156 168 181
187 175 158 131 144
173 186 183 167 141]
IRwindow - ILwindow
and i get this output:
IRwindow =
183 171 150 125 137
138 167 184 173 152
105 114 141 167 185
148 113 105 115 141
186 183 147 112 105
ILwindow =
201 170 165 177 203
181 174 167 169 189
154 150 156 168 181
187 175 158 131 144
173 186 183 167 141
ans =
-18 1 -15 -52 -66
-43 -7 17 4 -37
-49 -36 -15 -1 4
-39 -62 -53 -16 -3
13 -3 -36 -55 -36
Check that your matrices are being created properly (as doubles and not as unsigned integers).