How to debug a SwiftUI widget memory issue?

I'm creating a widget in SwiftUI with a prefilled mock JSON stored locally.
The mock JSON contains the following data:
{
    "id": "111",
    "title": "some dummy title",
    "date": "1609865285",
    "thumbnail": "mock4"
}
and the mock4 image is stored in the Assets folder.
I load the mock data in the timeline method as follows:
for mockItem in storyManager.getMockData() {
    let item = WidgetFeedItem(newsData: mockItem)
    items.append(item)
}
let entry = FeedItemEntry(date: Date(), items: items)
if let nextDate = Calendar.current.date(byAdding: .minute, value: 15, to: Date()) {
    let timeline = Timeline(entries: [entry], policy: .after(nextDate))
    completion(timeline)
}
struct WidgetFeedItem: Hashable {
    var newsTitle = ""
    var newsDate = Date()
    var newsID = ""
    var newsimageURL = ""
    var articleLink = ""
    var deviceName = ""

    init(newsData: NSDictionary) {
        newsID = newsData.getStringForKey("id")
        newsTitle = newsData.getStringForKey("title")
        newsDate = newsData.getDateForKey("date")
        deviceName = newsData.getStringForKey("device_name")
        newsimageURL = newsData.getStringForKey("thumbnail")
        articleLink = newsData.getStringForKey("link")
    }
}
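(For reference, the same mock payload could also be decoded with Codable instead of the NSDictionary helpers; getStringForKey/getDateForKey above are presumably custom NSDictionary extensions not shown in the post. A minimal sketch, with MockNewsItem as a hypothetical name:)
struct MockNewsItem: Decodable {
    let id: String
    let title: String
    let date: String        // Unix-epoch seconds arrive as a string, e.g. "1609865285"
    let thumbnail: String

    // Convert the epoch string to a Date by hand.
    var publishedAt: Date? {
        TimeInterval(date).map(Date.init(timeIntervalSince1970:))
    }
}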
My widget UI is also simple:
var body: some View {
    ZStack {
        VStack {
            header
            Spacer()
            largeBody
            Rectangle().fill(Color("separatorColor")).frame(height: 1).padding(.horizontal, 20)
            Spacer()
        }
        getHeaderIcon()
    }
}

var header: some View {
    ZStack {
        if let url = getDeepLink(.none, .header) {
            Link(destination: url) {
                getHeaderBackground()
                getHeaderLabel()
            }
        }
    }
}

var largeBody: some View {
    ForEach(entry.items, id: \.self) { item in
        HStack {
            if let url = getDeepLink(item, .storySubtitle) {
                Link(destination: url) {
                    getImage(item)
                    getHeadlines(item)
                    Spacer()
                }
            }
        }
        .padding(.leading, 20)
        .padding(.trailing, 50)
    }
}

func getHeaderBackground() -> some View {
    VStack {
        if isGreenApp() {
            Rectangle().fill(LinearGradient(gradient: Gradient(colors: [Color("gr1"), Color("gr2")]), startPoint: .top, endPoint: .bottom)).frame(height: 36)
        } else {
            Rectangle().fill(Color("WidgetBackground")).frame(height: 36)
        }
    }
}

func getHeaderLabel() -> some View {
    VStack {
        HStack {
            Text(getHeaderTitle()).foregroundColor(.white).font(headerFont)
            Spacer()
        }
        .padding(.leading, 15)
    }
}

func getHeaderIcon() -> some View {
    VStack {
        if let url = getDeepLink(.none, .header) {
            Link(destination: url) {
                HStack {
                    Spacer()
                    Image("headerIcon").resizable().frame(width: 35, height: 35, alignment: .center).padding(.trailing, 20).padding(.top, 20)
                }
            }
        }
        Spacer()
    }
}

func getHeadlines(_ item: WidgetFeedItem?) -> some View {
    VStack(alignment: .leading) {
        Text(item?.newsTitle ?? "").font(titleFont).foregroundColor(Color("textColor")).fixedSize(horizontal: false, vertical: true).lineLimit(3)
        Text(item?.newsDate.articleDateFormatForWidget() ?? "").font(subtitleFont).foregroundColor(Color("textColor"))
    }
}

func getImage(_ item: WidgetFeedItem?) -> some View {
    HStack {
        Image("newsImage")
            .data(url: URL(string: item?.newsimageURL ?? "")!)
            .frame(width: 80, height: 50)
    }
}
Everything works great, except that this setup crashes every time I reload the widget while observing memory in the Leaks instrument.
Initially memory consumption is well below 15 MB, but after the second reload it jumps to 30 MB and crashes.
All the assets in my local folder add up to no more than 2.5 MB, plus 1.0 MB for the custom fonts.
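One thing worth keeping in mind here: the on-disk size of an asset says little about its in-memory cost, because images are decoded into uncompressed bitmaps before display. A rough back-of-the-envelope calculation (the 1200x800 dimensions are purely an illustrative assumption):
// Decoded bitmap cost is roughly width x height x 4 bytes (RGBA),
// regardless of how small the PNG/JPEG is on disk.
let width = 1200.0, height = 800.0, bytesPerPixel = 4.0
let decodedMB = width * height * bytesPerPixel / 1_048_576
print(decodedMB)    // ~3.7 MB for a single decoded image
The "CoreUI image data 5632K" row in the memory graph below is consistent with this: the decoded asset-catalog images alone account for about 5.6 MB.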
Debugging the memory graph:
I exported the memory graph at the time of the crash, and I got the following info:
Summary:
Physical footprint: 34.4M
Physical footprint (peak): 34.4M
----
ReadOnly portion of Libraries: Total=505.5M resident=218.4M(43%) swapped_out_or_unallocated=287.1M(57%)
Writable regions: Total=615.1M written=24.3M(4%) resident=33.2M(5%) swapped_out=0K(0%) unallocated=581.9M(95%)
VIRTUAL RESIDENT DIRTY SWAPPED VOLATILE NONVOL EMPTY REGION
REGION TYPE SIZE SIZE SIZE SIZE SIZE SIZE SIZE COUNT (non-coalesced)
=========== ======= ======== ===== ======= ======== ====== ===== =======
Activity Tracing 256K 32K 32K 0K 0K 32K 0K 1
CoreAnimation 16K 16K 16K 0K 0K 0K 0K 1
CoreUI image data 5632K 5632K 5632K 0K 0K 0K 0K 1
Foundation 784K 784K 784K 0K 0K 0K 0K 5
IOKit 224K 224K 224K 0K 0K 0K 0K 14
Kernel Alloc Once 32K 16K 16K 0K 0K 0K 0K 1
MALLOC guard page 128K 0K 0K 0K 0K 0K 0K 8
MALLOC metadata 240K 208K 208K 0K 0K 0K 0K 11
MALLOC_LARGE 1632K 1632K 1632K 0K 0K 0K 0K 30 see MALLOC ZONE table below
MALLOC_LARGE metadata 16K 16K 16K 0K 0K 0K 0K 1 see MALLOC ZONE table below
MALLOC_NANO 512.0M 2128K 2128K 0K 0K 0K 0K 1 see MALLOC ZONE table below
MALLOC_SMALL 48.0M 1008K 976K 0K 0K 0K 0K 6 see MALLOC ZONE table below
MALLOC_TINY 7168K 592K 576K 0K 0K 0K 0K 7 see MALLOC ZONE table below
Performance tool data 32.9M 19.8M 19.7M 0K 0K 0K 0K 10 not counted in TOTAL below
STACK GUARD 96K 0K 0K 0K 0K 0K 0K 6
Stack 3728K 304K 272K 0K 0K 0K 0K 6
Stack (reserved) 544K 0K 0K 0K 0K 0K 0K 1 reserved VM address space (unallocated)
Stack Guard 16K 0K 0K 0K 0K 0K 0K 1
VM_ALLOCATE 2608K 1504K 1488K 0K 0K 0K 0K 5
VM_ALLOCATE (reserved) 16K 0K 0K 0K 0K 0K 0K 1 reserved VM address space (unallocated)
__AUTH 1946K 1898K 82K 0K 0K 0K 0K 264
__AUTH_CONST 14.8M 8245K 3432 0K 0K 0K 0K 389
__DATA 9926K 5739K 944K 0K 0K 0K 0K 381
__DATA_CONST 12.9M 9.8M 80K 0K 0K 0K 0K 394
__DATA_DIRTY 1325K 1245K 661K 0K 0K 0K 0K 319
__FONT_DATA 4K 0K 0K 0K 0K 0K 0K 1
__LINKEDIT 152.2M 28.7M 0K 0K 0K 0K 0K 8
__OBJC_CONST 2883K 2883K 13K 0K 0K 0K 0K 237
__OBJC_RO 71.2M 55.0M 0K 0K 0K 0K 0K 1
__OBJC_RW 2896K 1805K 13K 0K 0K 0K 0K 1
__TEXT 353.3M 189.7M 80K 0K 0K 0K 0K 414
__UNICODE 588K 528K 0K 0K 0K 0K 0K 1
mapped file 31.9M 4592K 48K 0K 0K 0K 0K 9
shared memory 48K 48K 48K 0K 0K 0K 0K 3
unused but dirty shlib __DATA 268K 268K 268K 0K 0K 0K 0K 74
=========== ======= ======== ===== ======= ======== ====== ===== =======
TOTAL 1.2G 323.5M 15.9M 0K 0K 32K 0K 2603
TOTAL, minus reserved VM space 1.2G 323.5M 15.9M 0K 0K 32K 0K 2603
VIRTUAL RESIDENT DIRTY SWAPPED ALLOCATION BYTES DIRTY+SWAP REGION
MALLOC ZONE SIZE SIZE SIZE SIZE COUNT ALLOCATED FRAG SIZE % FRAG COUNT
=========== ======= ========= ========= ========= ========= ========= ========= ====== ======
DefaultMallocZone_0x10227c000 512.0M 2128K 2128K 0K 15522 888K 1240K 59% 1
MallocHelperZone_0x102254000 55.6M 3216K 3168K 0K 1056 2559K 609K 20% 43
QuartzCore_0x1023b0000 1024K 32K 32K 0K 7 1888 30K 95% 1
=========== ======= ========= ========= ========= ========= ========= ========= ====== ======
TOTAL 568.6M 5376K 5328K 0K 16585 3449K 1879K 36% 45
Heap Info:
Physical footprint: 34.4M
Physical footprint (peak): 34.4M
----
Process 409: 3 zones
All zones: 16585 nodes malloced - Sizes: 1024KB[1] 64KB[1] 48KB[1] 32KB[4] 16KB[23] 13KB[4] 12KB[1] 10KB[2] 9KB[1] 8.5KB[7] 8KB[18] 6.5KB[1] 5KB[5] 4.5KB[3] 4KB[13] 3.5KB[6] 3KB[13] 2.5KB[15] 2KB[25] 1.5KB[40] 1KB[20] 1008[1] 960[1] 944[1] 896[1] 880[1] 864[15] 848[1] 832[1] 800[9] 784[9] 768[2] 752[5] 688[6] 672[1] 656[32] 640[8] 624[1] 608[15] 592[2] 576[3] 544[22] 528[14] 512[52] 480[53] 464[12] 448[3] 432[12] 400[10] 384[18] 368[173] 352[60] 336[39] 320[12] 304[9] 288[35] 272[29] 256[76] 240[36] 224[21] 208[224] 192[66] 176[97] 160[140] 144[84] 128[523] 112[920] 96[617] 80[858] 64[1638] 48[3851] 32[5588] 16[974]
Found 491 ObjC classes
Found 483 Swift classes
Found 143 CFTypes
-----------------------------------------------------------------------
All zones: 16585 nodes (3531952 bytes)
COUNT BYTES AVG CLASS_NAME TYPE BINARY
===== ===== === ========== ==== ======
3327 1837840 552.4 non-object
1948 62336 32.0 Class.data (class_rw_t) C libobjc.A.dylib
1011 32352 32.0 NSMutableDictionary ObjC CoreFoundation
973 172160 176.9 NSMutableDictionary (Storage) C CoreFoundation
857 43008 50.2 CFString ObjC CoreFoundation
571 27408 48.0 Class.data.extended (class_rw_ext_t) C libobjc.A.dylib
458 120336 262.7 CFData ObjC CoreFoundation
454 21792 48.0 NSMutableArray ObjC CoreFoundation
437 13808 31.6 NSMutableArray (Storage) C CoreFoundation
346 17888 51.7 Swift closure context Swift <unknown>
267 44640 167.2 Class.methodCache._buckets (bucket_t) C libobjc.A.dylib
256 15120 59.1 __NSMallocBlock__ ObjC libsystem_blocks.dylib
231 7392 32.0 NSDictionary ObjC CoreFoundation
231 3696 16.0 NSDictionary.cow (struct __cow_state_t) C CoreFoundation
228 80304 352.2 NSDictionary (Storage) C CoreFoundation
176 5632 32.0 NSNumber ObjC CoreFoundation
134 8704 65.0 Class.data.methods (method_array_t) C libobjc.A.dylib
133 6784 51.0 _ContiguousArrayStorage<AGAttribute> Swift libswiftCore.dylib
104 13280 127.7 _ContiguousArrayStorage<DisplayList.Item> Swift libswiftCore.dylib
100 43312 433.1 NSDictionary ObjC CoreFoundation
88 8448 96.0 TypedElement<AccessibilityProperties.ViewTypeDescription> Swift SwiftUI
81 4560 56.3 __NSMallocBlock__ ObjC libsystem_blocks.dylib
80 3216 40.2 NSArray ObjC CoreFoundation
72 4608 64.0 TypedElement<AccessibilityProperties.TraitsKey> Swift SwiftUI
69 4384 63.5 _ContiguousArrayStorage<ViewTransform.Chunk> Swift libswiftCore.dylib
69 3312 48.0 Chunk Swift SwiftUI
69 3312 48.0 _ContiguousArrayStorage<ViewTransform.Chunk.Tag> Swift libswiftCore.dylib
68 3264 48.0 _ContiguousArrayStorage<CGFloat> Swift libswiftCore.dylib
66 1056 16.0 NSArray ObjC CoreFoundation
59 19360 328.1 CFDictionary (Value Storage) C CoreFoundation
58 5568 96.0 TypedElement<AccessibilityProperties.LabelKey> Swift SwiftUI
57 912 16.0 NSSet ObjC CoreFoundation
55 2640 48.0 NSKeyValueObservance ObjC Foundation
52 4352 83.7 CFString (Storage) C CoreFoundation
49 10192 208.0 UITraitCollection ObjC UIKitCore
47 22560 480.0 ResolvedStyledText Swift SwiftUI
47 16544 352.0 _ContiguousArrayStorage<AccessibilityNodeAttachment> Swift libswiftCore.dylib
47 2256 48.0 __SharedStringStorage Swift libswiftCore.dylib
47 1504 32.0 NSConcreteMutableAttributedString ObjC Foundation
47 752 16.0 NSMutableRLEArray ObjC Foundation
44 2816 64.0 CFDictionary ObjC CoreFoundation
44 1408 32.0 AGSubgraph CFType AttributeGraph
41 2624 64.0 TypedElement<AccessibilityProperties.InputLabelsKey> Swift SwiftUI
40 1840 46.0 Class.data.properties (property_array_t) C libobjc.A.dylib
36 13440 373.3 CFDictionary (Key Storage) C CoreFoundation
36 2304 64.0 OS_os_log ObjC libsystem_trace.dylib
35 4496 128.5 _ContiguousArrayStorage<PreferencesOutputs.KeyValue> Swift libswiftCore.dylib
35 1120 32.0 NSAttributeDictionaryEnumerator ObjC UIFoundation
34 2048 60.2 _ContiguousArrayStorage<AccessibilityNode> Swift libswiftCore.dylib
33 3696 112.0 CUIRenditionKey ObjC
I posted only part of the heap log, due to the body limit on SO.
How do I go forward from here, debug further, and find out why my widget is eating so much memory?
Note: it only crashes when I profile in Instruments. In normal use, memory consumption stays below 12 MB.

Sometimes the profiler doesn't show some memory issues. You can try looking at your images: for backgrounds, for example, try using a 1 px template instead of a large picture.
And the code:
.background(
    Image("1px_background")
        .resizable()
        .aspectRatio(contentMode: .fill)
)
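More generally, WidgetKit extensions run under a tight memory budget (commonly observed to be around 30 MB, which would match the crash point described above), so it also helps to decode thumbnails at the size they are displayed instead of at full resolution. A minimal sketch using ImageIO; the 80x50 target comes from getImage above, and the 3x screen scale is an assumption:
import ImageIO
import UIKit

// Decode an image at (at most) maxPixelSize on its longest edge instead of
// decoding the full-resolution bitmap, keeping the widget's footprint low.
func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> UIImage? {
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else {
        return nil
    }
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary
    guard let thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else {
        return nil
    }
    return UIImage(cgImage: thumbnail)
}

// Usage for an 80x50 pt cell on an assumed 3x device: 80 * 3 = 240 px longest edge.
// let image = downsampledImage(at: fileURL, maxPixelSize: 240)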

Related

How to fix Ceph warning "storage filling up"

I have a Ceph cluster, and the monitoring tab of the dashboard shows me the warning "storage filling up":
alertname
storage filling up
description
Mountpoint /rootfs/run on ceph2-node-03.fns will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.
but all devices are free:
[root@ceph2-node-01 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.01900 1.00000 20 GiB 61 MiB 15 MiB 0 B 44 MiB 20 GiB 0.30 0.92 0 up
3 ssd 0.01900 1.00000 20 GiB 69 MiB 15 MiB 5 KiB 53 MiB 20 GiB 0.33 1.04 1 up
1 hdd 0.01900 1.00000 20 GiB 76 MiB 16 MiB 6 KiB 60 MiB 20 GiB 0.37 1.15 0 up
4 ssd 0.01900 1.00000 20 GiB 68 MiB 15 MiB 3 KiB 52 MiB 20 GiB 0.33 1.03 1 up
2 hdd 0.01900 1.00000 20 GiB 66 MiB 16 MiB 6 KiB 50 MiB 20 GiB 0.32 1.00 0 up
5 ssd 0.01900 1.00000 20 GiB 57 MiB 15 MiB 5 KiB 41 MiB 20 GiB 0.28 0.86 1 up
TOTAL 120 GiB 396 MiB 92 MiB 28 KiB 300 MiB 120 GiB 0.32
MIN/MAX VAR: 0.86/1.15 STDDEV: 0.03
What should I do to fix this warning?
Is this a bug, or ...?

RMarkdown: Creating two side-by-side heatmaps with full figure borders using the pheatmap package

I am writing my first report in RMarkdown and struggling with specific figure alignments.
I have some data that I am manipulating into a format friendly to the pheatmap package so that it produces heatmap HTML output. The code that produces one of these heatmaps looks like:
cleaned_mayo <- cleaned_mayo[which(cleaned_mayo$Source == "MayoBrainBank_Dickson"), ]
# Segregate data
ad <- cleaned_mayo[which(cleaned_mayo$Diagnosis == "AD"), -c(1:13)]
control <- cleaned_mayo[which(cleaned_mayo$Diagnosis == "Control"), -c(1:13)]
# Average data across patients and assign diagnoses
ad <- as.data.frame(t(apply(ad, 2, mean)))
control <- as.data.frame(t(apply(control, 2, mean)))
ad$Diagnosis <- "AD"
control$Diagnosis <- "Control"
# Combine
avg_heat <- rbind(ad, control)
# Rearrange columns
avg_heat <- avg_heat[, c(32, 1:31)]
# Mean shift all expression values
avg_heat[, 2:32] <- apply(avg_heat[, 2:32], 2, function(x) { x - mean(x) })
#################################
# CREATE HEAT MAP
#################################
# Plot average heat map
pheatmap(t(avg_heat[, 2:32]), cluster_col = F, labels_col = c("AD", "Control"),
         gaps_col = c(1), labels_row = colnames(avg_heat)[2:32],
         main = "Mayo Differential Expression for Genes of Interest: Averaged Across \n Patients within a Diagnosis",
         show_colnames = T)
Where the numeric columns of cleaned_mayo look like:
C1QA C1QC C1QB LAPTM5 CTSS FCER1G PLEK CSF1R CD74 LY86 AIF1 FGD2 TREM2 PTK2B LYN UNC93B1 CTSC NCKAP1L TMEM119 ALOX5AP LCP1
1924_TCX 1101 1392 1687 1380 380 279 198 1889 6286 127 252 771 338 5795 409 494 337 352 476 170 441
1926_TCX 881 770 950 1064 239 130 132 1241 3188 76 137 434 212 5634 327 419 292 217 464 124 373
1935_TCX 3636 4106 5196 5206 1226 583 476 5588 27650 384 1139 1086 756 14219 1269 869 868 1378 1270 428 1216
1925_TCX 3050 4392 5357 3585 788 472 350 4662 11811 340 865 1051 468 13446 638 420 1047 850 756 616 1008
1963_TCX 3169 2874 4182 2737 828 551 208 2560 10103 204 719 585 499 9158 546 335 598 593 606 418 707
7098_TCX 1354 1803 2369 2134 634 354 245 1829 8322 227 593 371 411 10637 504 294 750 458 367 490 779
ITGAM LPCAT2 LGALS9 GRN MAN2B1 TYROBP CD37 LAIR1 CTSZ CYTH4
1924_TCX 376 649 699 1605 618 392 328 628 1774 484
1926_TCX 225 381 473 1444 597 242 290 321 1110 303
1935_TCX 737 1887 998 2563 856 949 713 1060 2670 569
1925_TCX 634 1323 575 1661 594 562 421 1197 1796 595
1963_TCX 508 696 429 1030 355 556 365 585 1591 360
7098_TCX 418 1011 318 1574 354 353 179 471 1471 321
All of this code is wrapped in the following chunk header in the RMarkdown environment: {r heatmaps, echo=FALSE, results="asis", message=FALSE}.
What I would like to achieve is the two heatmaps side by side, with a black box around each individual heatmap (i.e., containing the title and legend of the heatmap as well).
If anyone could tell me how to do this, or either part individually, it would be greatly appreciated.
Thanks!

Search objects with size larger than a threshold

One of my classes has many objects present in the .NET heap, as discovered through the following SOS command:
!dumpheap -stat -type MyClass
Statistics:
MT Count TotalSize Class Name
00007ff8e6253494 1700 164123 MyNameSpace.MyClass
I need to find the instances of those objects that have an ObjSize greater than 5 MB. I know I can list the objsize of all 1700 instances of MyClass using the following:
.foreach (res {!DumpHeap -short -MT 00007ff8e6253494 }) {.if ( (!objsize res) > 41943040) {.echo res; !objsize res}}
With the script above, I don't get any results, although there are object instances greater than 5 MB. I think the problem may be that the output of !objsize is as follows:
20288 (0x4f40) bytes
It's a string, which makes it harder to compare against any threshold. How can I get this script to list only objects that have an objsize larger than 5 MB?
Creating complex scripts in WinDbg is quite error-prone. In such situations, I switch to PyKd, which is a WinDbg extension that uses Python.
In the following, I'll only cover the missing piece in your puzzle, which is the part that does not work:
.if ( (!objsize res) > 41943040) {.echo res; !objsize res}
Here's my starting point:
0:009> !dumpheap -min 2000
Address MT Size
00000087c6041fe8 000007f81ea5f058 10158
00000087d6021018 000007f81ea3f1b8 8736
00000087d6023658 000007f81ea3f1b8 8192
00000087d6025658 000007f81ea3f1b8 16352
00000087d6029638 000007f81ea3f1b8 32672
You can write a script like this (no error handling!):
from pykd import *
import re
import sys

# Run !objsize on the address passed as the first argument and pull the
# decimal byte count out of "sizeof(...) = N (0x...) bytes".
objsizeStr = dbgCommand("!objsize " + sys.argv[1])
number = re.search("= (.*)\(0x", objsizeStr)
size = int(number.group(1))
if size > 10000:
    print sys.argv[1], size
and use it within your loop:
0:009> .foreach (res {!dumpheap -short -min 2000}) { !py c:\tmp\size.py ${res}}
00000087c6041fe8 10160
00000087d6021018 37248
00000087d6023658 27360
00000087d6025658 54488
00000087d6029638 53680
Note how the size of !objsize differs from that of !dumpheap. Just for cross-checking:
0:009> !objsize 00000087d6023658
sizeof(00000087d6023658) = 27360 (0x6ae0) bytes (System.Object[])
See also this answer on how to improve the script using expr() so that you can pass expressions etc. The way I did it outputs the size in decimal, but that's not explicit; maybe you want to output a 0n prefix to make it clear.
Well, as Steve commented, !dumpheap takes -min and -max parameters, and with those it should be possible to do it natively:
0:004> !DumpHeap -type System.String -stat
Statistics:
MT Count TotalSize Class Name
6588199c 1 12 System.Collectionsxxxxx
65454aec 1 48 System.Collectionsxxxxx
65881aa8 1 60 System.Collectionsxxxxx
6587e388 17 596 System.String[]
6587d834 168 5300 System.String
Total 188 objects
0:004> !DumpHeap -type System.String -stat -min 0n64 -max 0n100
Statistics:
MT Count TotalSize Class Name
6587e388 3 212 System.String[]
6587d834 9 684 System.String
Total 12 objects
0:004> !DumpHeap -type System.String -min 0n64 -max 0n100
Address MT Size
01781280 6587d834 76
01781354 6587d834 78
01781478 6587e388 84
017816d8 6587d834 64
01781998 6587d834 78
017819e8 6587d834 70
01781a30 6587d834 82
01782974 6587d834 78
01782a6c 6587d834 90
01782c7c 6587d834 68
01783720 6587e388 64
01783760 6587e388 64
Statistics:
MT Count TotalSize Class Name
6587e388 3 212 System.String[]
6587d834 9 684 System.String
Total 12 objects
By manipulating -max and -min we can fine-tune down to just one or two objects.
Here is an example where we have 1 extra object on the upper side and 2 extra objects on the lower side
compared with the preceding output (15 objects versus 12 objects):
0:004> !DumpHeap -type System.String -min 0n62 -max 0n106
Address MT Size
01781280 6587d834 76
01781354 6587d834 78
017813e8 6587d834 62
01781478 6587e388 84
017816d8 6587d834 64
01781898 6587d834 106
01781998 6587d834 78
017819e8 6587d834 70
01781a30 6587d834 82
01782974 6587d834 78
01782a6c 6587d834 90
01782c7c 6587d834 68
01783720 6587e388 64
01783760 6587e388 64
01783e4c 6587d834 62
Statistics:
MT Count TotalSize Class Name
6587e388 3 212 System.String[]
6587d834 12 914 System.String
Total 15 objects
If one needs both the address and the size for some reason, one could always awk it:
0:004> .shell -ci "!DumpHeap -type System.String -min 0n62 -max 0n106" awk "{print $1,$3}"
Address Size
01781280 76
01781354 78
017813e8 62
01781478 84
017816d8 64
01781898 106
01781998 78
017819e8 70
01781a30 82
01782974 78
01782a6c 90
01782c7c 68
01783720 64
01783760 64
01783e4c 62

Creation of a loop loading values from .txt files

I have a problem creating a loop that loads each value from ".txt" files and uses it in some calculations.
All the values are in the 2nd column, and the first one is always on the 9th line of each file.
Each ".txt" file contains a different number of values in its 2nd column (they all have the same text after the final value), so I want a loop that can read those values and stop whenever it finds that text.
Here is an example of these files (the values that interest me are the ones under the G heading: 33, 55, 93, ..., 18); a minimal parsing sketch follows the sample.
Latitude: 34°40'30" North,
Longitude: 3°16'6" East
Results for: April
Inclination of plane: 32 deg.
Orientation (azimuth) of plane: 0 deg.
Time G Gd Gc DNI DNIc A Ad Ac
05:52 33 33 25 0 0 233 64 311
06:07 55 44 47 246 361 356 105 473
06:22 93 59 92 312 459 444 124 590
06:37 136 73 147 366 538 514 138 684
06:52 183 86 207 410 602 572 150 760
07:07 232 98 271 447 656 620 160 823
07:22 283 110 337 478 701 659 168 874
16:37 283 110 337 478 701 659 168 874
16:52 232 98 271 447 656 620 160 823
17:07 183 86 207 410 602 572 150 760
17:22 136 73 147 366 538 514 138 684
17:37 93 59 92 312 459 444 124 590
17:52 55 44 47 246 361 356 105 473
18:07 33 33 25 0 0 233 64 311
18:22 18 18 14 0 0 9 8 7
G: Global irradiance on a fixed plane (W/m2)
Gd: Diffuse irradiance on a fixed plane (W/m2)
Gc: Global clear-sky irradiance on a fixed plane (W/m2)
DNI: Direct normal irradiance (W/m2)
DNIc: Clear-sky direct normal irradiance (W/m2)
A: Global irradiance on 2-axis tracking plane (W/m2)
Ad: Diffuse irradiance on 2-axis tracking plane (W/m2)
Ac: Global clear-sky irradiance on 2-axis tracking plane (W/m2)
PVGIS (c) European Communities, 2001-2012
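Since the post doesn't say which language the loop should be in, here is a minimal sketch in Swift under the stated assumptions (first value on line 9, values in the 2nd column, stop at the first non-numeric footer line); readGValues is a hypothetical helper name:
import Foundation

// Read the G column (2nd column) from a PVGIS-style text file like the one
// above: skip the first 8 header lines, then collect the 2nd field of each
// row until a line whose 2nd field is not numeric (the footer text).
func readGValues(from fileURL: URL) throws -> [Double] {
    let lines = try String(contentsOf: fileURL, encoding: .utf8)
        .components(separatedBy: .newlines)
    var values: [Double] = []
    for line in lines.dropFirst(8) {                  // first value is on line 9
        let columns = line.split(whereSeparator: { $0.isWhitespace })
        guard columns.count > 1, let g = Double(String(columns[1])) else { break }
        values.append(g)
    }
    return values
}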

Azure Virtual Machine Disk IOPS Performance vs AWS

I have a MongoDB replica set with approx. 200GB of data.
This currently exists in AWS on two medium.m3 instances (1 core, 3.7 GB). I have a requirement to move this to Azure A2 instances (2 cores, 3.5 GB); however, I am concerned about the performance.
In AWS I have a single disk per machine, a 220 GB SSD through EBS, which delivers 660 IOPS (or whatever that means in AWS speak).
According to Azure, I should get 500 IOPS per disk, so I thought performance would be comparable; however, here are the results of mongoperf on Azure:
Azure mongoperf Output:
{ nThreads: 2, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
optoins:{ nThreads: 2, fileSizeMB: 1000, r: true }
wthr 2
new thread, total running : 1
read:1 write:0
64 ops/sec 0 MB/sec
82 ops/sec 0 MB/sec
85 ops/sec 0 MB/sec
111 ops/sec 0 MB/sec
95 ops/sec 0 MB/sec
106 ops/sec 0 MB/sec
96 ops/sec 0 MB/sec
112 ops/sec 0 MB/sec
new thread, total running : 2
read:1 write:0
188 ops/sec 0 MB/sec
195 ops/sec 0 MB/sec
223 ops/sec 0 MB/sec
137 ops/sec 0 MB/sec
222 ops/sec 0 MB/sec
212 ops/sec 0 MB/sec
200 ops/sec 0 MB/sec
Whilst my AWS medium.m3 instances perform totally differently:
AWS mongoperf Output:
{ nThreads: 2, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
optoins:{ nThreads: 2, fileSizeMB: 1000, r: true }
wthr 2
new thread, total running : 1
read:1 write:0
3149 ops/sec 12 MB/sec
3169 ops/sec 12 MB/sec
3071 ops/sec 11 MB/sec
3044 ops/sec 11 MB/sec
2688 ops/sec 10 MB/sec
2880 ops/sec 11 MB/sec
3039 ops/sec 11 MB/sec
3020 ops/sec 11 MB/sec
new thread, total running : 2
read:1 write:0
3133 ops/sec 12 MB/sec
3044 ops/sec 11 MB/sec
3052 ops/sec 11 MB/sec
3016 ops/sec 11 MB/sec
2928 ops/sec 11 MB/sec
3041 ops/sec 11 MB/sec
3061 ops/sec 11 MB/sec
3025 ops/sec 11 MB/sec
How can I achieve the same performance through Azure as I do through AWS? I have looked at the D* instances, which provide 500 GB of local SSD storage; however, these disks are ephemeral and so are no good for hosting my database.
Edit: I can see that I can attach additional Premium Storage drives to the D* instances; however, the costs for these are massive compared to AWS. It looks like for high-performance IO you still cannot beat AWS cost-wise.
The approach I have taken is to attach the maximum number of drives the server can support; for an A2 Standard this is 4 drives.
I have 4 x 200 GB drives placed in a RAID0 array, giving ~800 GB of storage.
The RAID0 allows me to combine the 500 IOPS per disk into a theoretical maximum of 2000 IOPS.
This results in the following speeds on the A2 machine from mongoperf. For some reason the single-threaded performance is very low, including the test file write, which happens at only 150 IOPS. At 10 threads the speed exceeds the AWS instances, though I'm unsure whether some kind of read-ahead caching is going on in Azure that would not apply in a real DB scenario. Performance on AWS does not change with increased thread count.
Azure Performance:
{ nThreads: 10, fileSizeMB: 1000, r: true }
creating test file size:1000MB ...
testing...
optoins:{ nThreads: 10, fileSizeMB: 1000, r: true }
wthr 10
new thread, total running : 1
read:1 write:0
125 ops/sec 0 MB/sec
194 ops/sec 0 MB/sec
174 ops/sec 0 MB/sec
213 ops/sec 0 MB/sec
138 ops/sec 0 MB/sec
117 ops/sec 0 MB/sec
174 ops/sec 0 MB/sec
92 ops/sec 0 MB/sec
new thread, total running : 2
read:1 write:0
354 ops/sec 1 MB/sec
359 ops/sec 1 MB/sec
322 ops/sec 1 MB/sec
408 ops/sec 1 MB/sec
440 ops/sec 1 MB/sec
265 ops/sec 1 MB/sec
472 ops/sec 1 MB/sec
484 ops/sec 1 MB/sec
new thread, total running : 4
read:1 write:0
read:1 write:0
984 ops/sec 3 MB/sec
915 ops/sec 3 MB/sec
1419 ops/sec 5 MB/sec
1669 ops/sec 6 MB/sec
1934 ops/sec 7 MB/sec
1660 ops/sec 6 MB/sec
1348 ops/sec 5 MB/sec
1735 ops/sec 6 MB/sec
new thread, total running : 8
read:1 write:0
read:1 write:0
read:1 write:0
read:1 write:0
4041 ops/sec 15 MB/sec
5370 ops/sec 20 MB/sec
5643 ops/sec 22 MB/sec
5639 ops/sec 22 MB/sec
4388 ops/sec 17 MB/sec
6093 ops/sec 23 MB/sec
6350 ops/sec 24 MB/sec
6961 ops/sec 27 MB/sec
new thread, total running : 10
read:1 write:0
read:1 write:0
9684 ops/sec 37 MB/sec
11528 ops/sec 45 MB/sec
13807 ops/sec 53 MB/sec
16666 ops/sec 65 MB/sec
16306 ops/sec 63 MB/sec
24292 ops/sec 94 MB/sec
24264 ops/sec 94 MB/sec
19358 ops/sec 75 MB/sec
28067 ops/sec 109 MB/sec
43151 ops/sec 168 MB/sec
45165 ops/sec 176 MB/sec
44847 ops/sec 175 MB/sec
43806 ops/sec 171 MB/sec
43103 ops/sec 168 MB/sec
43477 ops/sec 169 MB/sec
44651 ops/sec 174 MB/sec
45365 ops/sec 177 MB/sec
41495 ops/sec 162 MB/sec
45281 ops/sec 176 MB/sec
47014 ops/sec 183 MB/sec
46056 ops/sec 179 MB/sec
45418 ops/sec 177 MB/sec
42363 ops/sec 165 MB/sec
43974 ops/sec 171 MB/sec
At the end, with the high read IO, this gives me very odd numbers from iostat:
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
sda 0.00 0.00 0.00 0 0
sdb 0.00 0.00 0.00 0 0
sdd 10885.07 43540.30 0.00 87516 0
sde 10958.21 43832.84 0.00 88104 0
sdf 10960.70 43842.79 0.00 88124 0
sdc 10920.40 43681.59 0.00 87800 0
md127 43722.89 174891.54 0.00 351532 0
However: when I run mongoperf with reads AND writes, performance falls off a cliff, while AWS speeds remain identical.
Azure with read-and-write mongoperf:
new thread, total running : 10
read:1 write:1
read:1 write:1
126 ops/sec 0 MB/sec
84 ops/sec 0 MB/sec
150 ops/sec 0 MB/sec
123 ops/sec 0 MB/sec
84 ops/sec 0 MB/sec
190 ops/sec 0 MB/sec
179 ops/sec 0 MB/sec
108 ops/sec 0 MB/sec
171 ops/sec 0 MB/sec
192 ops/sec 0 MB/sec
152 ops/sec 0 MB/sec
103 ops/sec 0 MB/sec
163 ops/sec 0 MB/sec
116 ops/sec 0 MB/sec
121 ops/sec 0 MB/sec
76 ops/sec 0 MB/sec