Emacs freezes regularly: how to identify the cause?

I'm experiencing regular freezes in Emacs (roughly every 15 seconds, each lasting 2 to 3 seconds).
I started the profiler with CPU sampling:
- ...                                       4588  44%
   Automatic GC                             4588  44%
+ redisplay_internal (C function)           3405  33%
+ command-execute                           1887  18%
+ timer-event-handler                        384   3%
+ flyspell-post-command-hook                  18   0%
+ undo-auto--add-boundary                      8   0%
+ internal-echo-keystrokes-prefix              8   0%
+ delete-selection-pre-hook                    7   0%
+ eldoc-schedule-timer                         3   0%
  tooltip-hide                                 3   0%
  internal-timer-start-idle                    2   0%
+ gui-set-selection                            2   0%
+ global-visual-line-mode-check-buffers        1   0%
+ deactivate-mark                              1   0%
How can I proceed from here to identify the root cause?
This kind of freeze is annoying when typing longer stretches of code or text.
Any help is highly appreciated.
Thanks
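Since "Automatic GC" accounts for 44% of the samples, garbage collection is the most likely culprit. A minimal sketch for confirming and mitigating this from the init file; the threshold value is illustrative, not tuned:

;; Print a message in the echo area whenever a GC runs, so that
;; collections can be correlated with the visible freezes.
(setq garbage-collection-messages t)

;; Raise the GC threshold (default: 800,000 bytes) so collections
;; run less often; 100 MB here is an illustrative value.
(setq gc-cons-threshold (* 100 1024 1024))

If the GC messages line up with the freezes, the next step is to find what is allocating so heavily, e.g. M-x memory-report (Emacs 28+) or disabling recently added modes one by one.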

Related

Capacity consumption in Power BI

I'm trying to come up with a chart visualization like this in Power BI:
X axis - time (month);
Y axis - % of consumption and remaining %;
Would it be possible to do this?
I'm also unsure how to create a table to represent this goal; I was thinking of something like this:
Date        Consumption  Capacity  Remaining  %Com  %Rem  %Extra needed  %Com accum
01/07/2022           50       150        100   33%   67%             0%         33%
20/07/2022           20       150         80   13%   53%             0%         47%
30/07/2022           10       150         70    7%   47%             0%         53%
04/08/2022           40       150         30   27%   20%             0%         80%
05/08/2022           35       150         -5   23%   -3%             3%        103%
10/08/2022           10       150        -15    7%  -10%            10%        110%
However, I don't see a way to calculate the last column, %Com accum, which is the running sum of %Com: each row adds its own %Com to the previous row's accumulated value (it's a row-by-row calculation).
For the last row: 103% (previous %Com accum) + 7% (%Com) = 110%.
Also, the available capacity is a function of the time period we are in; it could be defined per quarter or per half-year, for example:
Q1 150, Q2 200, Q3 150, Q4 200
H1 350, H2 350
This can be achieved using the Deneb custom visual. https://deneb-viz.github.io/
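Independently of the visual, the running total itself can be computed with a standard DAX running-total measure. A sketch, assuming the data sits in a table named CapacityData with the Date and %Com columns from the question (the table name is illustrative):

%Com Accum =
CALCULATE (
    SUM ( CapacityData[%Com] ),
    FILTER (
        ALL ( CapacityData ),
        CapacityData[Date] <= MAX ( CapacityData[Date] )
    )
)

The FILTER over ALL ( CapacityData ) accumulates every row up to the current row's date, matching the row-by-row addition described above (103% + 7% = 110% on the last row). The period-dependent capacity could be handled the same way, by looking it up from a small Period/Capacity table instead of hard-coding 150.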

Ceph OSDs are full, but I have not stored that much data

I have a Ceph cluster running with 18 × 600 GB OSDs. There are three pools (size: 3, pg_num: 64), each with a 200 GB image, and there are 6 servers connected to these images via iSCSI, storing about 20 VMs on them. Here is the output of "ceph df":
POOLS:
    POOL                        ID  STORED   OBJECTS  USED     %USED   MAX AVAIL
    cephfs_data                  1      0 B        0      0 B       0        0 B
    cephfs_metadata              2   17 KiB       22  1.5 MiB  100.00        0 B
    defaults.rgw.buckets.data    3      0 B        0      0 B       0        0 B
    defaults.rgw.buckets.index   4      0 B        0      0 B       0        0 B
    .rgw.root                    5  2.0 KiB        5  960 KiB  100.00        0 B
    default.rgw.control          6      0 B        8      0 B       0        0 B
    default.rgw.meta             7    393 B        2  384 KiB  100.00        0 B
    default.rgw.log              8      0 B      207      0 B       0        0 B
    rbd                          9  150 GiB   38.46k  450 GiB  100.00        0 B
    rbd3                        13  270 GiB   69.24k  811 GiB  100.00        0 B
    rbd2                        14  150 GiB   38.52k  451 GiB  100.00        0 B
Based on this, I expect about 1.7 TB of raw capacity usage (the USED column sums to roughly 450 + 811 + 451 GiB ≈ 1.7 TiB), BUT it is currently about 9 TB!
RAW STORAGE:
    CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
    hdd    9.8 TiB  870 GiB  9.0 TiB   9.0 TiB      91.35
    TOTAL  9.8 TiB  870 GiB  9.0 TiB   9.0 TiB      91.35
And the cluster is down because very little capacity remains. I wonder what is causing this and how I can fix it.
Your help is much appreciated
The problem was mounting the iSCSI target without the discard option.
Since I am using Red Hat Virtualization, I just modified all storage domains created on top of Ceph and enabled "discard" on them. After just a few hours, about 1 TB of storage was released. Now, about 12 hours later, 5 TB of storage has been released.
Thanks
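For initiators that mount the LUN directly (outside RHV), the equivalent fix is to enable discard on the filesystem or trim it periodically. A sketch with a hypothetical mount point:

# remount the filesystem on the iSCSI LUN with online discard enabled
mount -o remount,discard /mnt/vmstore

# or issue a one-off trim (can also be scheduled via cron or fstrim.timer)
fstrim -v /mnt/vmstore

Either way, space is only returned to Ceph once the initiator actually discards the freed blocks, which is why the usage dropped gradually over several hours.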

What is using the large MEM_RESERVE in a .NET application?

We have an ASP.NET web service application that is hitting OutOfMemory errors in the IIS w3wp.exe process. We captured a mini memory dump of w3wp when the OOM happened, and it shows a large MEM_RESERVE but a small managed heap. The question is: how can we find out what is using the MEM_RESERVE (virtual address space)? I expect to see some MEM_RESERVE, but not 2.9 GB. Since MEM_RESERVE doesn't add up to the eeheap total, could it come from an unmanaged heap, and if so, is there a way to confirm that?
0:000> !address -summary
Mapping file section regions...
Mapping module regions...
Mapping PEB regions...
Mapping TEB and stack regions...
Mapping heap regions...
Mapping page heap regions...
Mapping other regions...
Mapping stack trace database regions...
Mapping activation context regions...
--- Usage Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
<unknown> 1664 d4c1d000 ( 3.324 GB) 92.34% 83.11%
Free 348 19951000 ( 409.316 MB) 9.99%
Image 1184 ad58000 ( 173.344 MB) 4.70% 4.23%
Heap 75 5ab4000 ( 90.703 MB) 2.46% 2.21%
Stack 193 1200000 ( 18.000 MB) 0.49% 0.44%
TEB 64 40000 ( 256.000 kB) 0.01% 0.01%
Other 10 35000 ( 212.000 kB) 0.01% 0.01%
PEB 1 1000 ( 4.000 kB) 0.00% 0.00%
--- Type Summary (for busy) ------ RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_PRIVATE 1323 d6f04000 ( 3.358 GB) 93.28% 83.96%
MEM_IMAGE 1828 dd75000 ( 221.457 MB) 6.01% 5.41%
MEM_MAPPED 40 1a26000 ( 26.148 MB) 0.71% 0.64%
--- State Summary ---------------- RgnCount ----------- Total Size -------- %ofBusy %ofTotal
MEM_RESERVE 746 b8b4e000 ( 2.886 GB) 80.16% 72.15%
MEM_COMMIT 2445 2db51000 ( 731.316 MB) 19.84% 17.85%
MEM_FREE 348 19951000 ( 409.316 MB) 9.99%
--- Protect Summary (for commit) - RgnCount ----------- Total Size -------- %ofBusy %ofTotal
PAGE_READWRITE 1024 1f311000 ( 499.066 MB) 13.54% 12.18%
PAGE_EXECUTE_READ 248 b2e6000 ( 178.898 MB) 4.85% 4.37%
PAGE_READONLY 613 1c1d000 ( 28.113 MB) 0.76% 0.69%
PAGE_WRITECOPY 219 103e000 ( 16.242 MB) 0.44% 0.40%
PAGE_EXECUTE_READWRITE 144 59c000 ( 5.609 MB) 0.15% 0.14%
PAGE_EXECUTE_WRITECOPY 69 21f000 ( 2.121 MB) 0.06% 0.05%
PAGE_READWRITE|PAGE_GUARD 128 144000 ( 1.266 MB) 0.03% 0.03%
--- Largest Region by Usage ----------- Base Address -------- Region Size ----------
<unknown> c2010000 4032000 ( 64.195 MB)
Free 86010000 4000000 ( 64.000 MB)
Image 72f48000 f5d000 ( 15.363 MB)
Heap 64801000 fcf000 ( 15.809 MB)
Stack 1ed0000 7a000 ( 488.000 kB)
TEB ffe8a000 1000 ( 4.000 kB)
Other fffb0000 23000 ( 140.000 kB)
PEB fffde000 1000 ( 4.000 kB)
0:000> !ao
---------Heap 1 ---------
Managed OOM occured after GC #2924 (Requested to allocate 0 bytes)
Reason: Didn't have enough memory to allocate an LOH segment
Detail: LOH: Failed to reserve memory (117440512 bytes)
---------Heap 7 ---------
Managed OOM occured after GC #2308 (Requested to allocate 0 bytes)
Reason: Could not do a full GC
0:000> !vmstat
TYPE MINIMUM MAXIMUM AVERAGE BLK COUNT TOTAL
~~~~ ~~~~~~~ ~~~~~~~ ~~~~~~~ ~~~~~~~~~ ~~~~~
Free:
Small 4K 64K 34K 219 7,475K
Medium 72K 832K 273K 80 21,903K
Large 1,080K 65,536K 7,954K 49 389,759K
Summary 4K 65,536K 1,204K 348 419,139K
Reserve:
Small 4K 64K 14K 436 6,519K
Medium 88K 1,020K 303K 169 51,303K
Large 1,088K 65,528K 21,052K 141 2,968,407K
Summary 4K 65,528K 4,056K 746 3,026,231K
Commit:
Small 4K 64K 16K 2,019 33,386K
Medium 68K 1,024K 287K 275 79,043K
Large 1,028K 65,736K 7,315K 87 636,435K
Summary 4K 65,736K 314K 2,381 748,866K
Private:
Small 4K 64K 28K 851 23,911K
Medium 88K 1,024K 313K 222 69,687K
Large 1,028K 65,736K 18,429K 186 3,427,951K
Summary 4K 65,736K 2,797K 1,259 3,521,550K
Mapped:
Small 4K 64K 18K 23 431K
Medium 68K 1,004K 385K 11 4,239K
Large 1,520K 6,640K 3,684K 6 22,104K
Summary 4K 6,640K 669K 40 26,775K
Image:
Small 4K 64K 9K 1,581 15,562K
Medium 68K 1,000K 267K 211 56,419K
Large 1,028K 15,732K 4,299K 36 154,787K
Summary 4K 15,732K 124K 1,828 226,771K
0:000> !eeheap -gc
Number of GC Heaps: 8
------------------------------
Heap 0 (01cd1720)
generation 0 starts at 0x44470de4
generation 1 starts at 0x43f51000
generation 2 starts at 0x02161000
ephemeral segment allocation context: none
segment begin allocated size
02160000 02161000 02dee4a8 0xc8d4a8(13161640)
43f50000 43f51000 444b9364 0x568364(5669732)
Large object heap starts at 0x12161000
segment begin allocated size
12160000 12161000 121bd590 0x5c590(378256)
c2010000 c2011000 c6041020 0x4030020(67305504)
Heap Size: Size: 0x5281dbc (86515132) bytes.
------------------------------
Heap 1 (01cd65a8)
generation 0 starts at 0x8c4b38e0
generation 1 starts at 0x8c011000
generation 2 starts at 0x04161000
ephemeral segment allocation context: none
segment begin allocated size
04160000 04161000 054114f0 0x12b04f0(19596528)
25c40000 25c41000 25fc7328 0x386328(3695400)
8c010000 8c011000 8c4c4778 0x4b3778(4929400)
Large object heap starts at 0x13161000
segment begin allocated size
13160000 13161000 13161010 0x10(16)
Heap Size: Size: 0x1ae9fa0 (28221344) bytes.
------------------------------
Heap 2 (01cdb5c0)
generation 0 starts at 0x4a71a420
generation 1 starts at 0x49f51000
generation 2 starts at 0x06161000
ephemeral segment allocation context: none
segment begin allocated size
06160000 06161000 06e89b18 0xd28b18(13798168)
27c40000 27c41000 27f8f6dc 0x34e6dc(3466972)
9a010000 9a011000 9a5900b8 0x57f0b8(5763256)
49f50000 49f51000 4a72cc2c 0x7dbc2c(8240172)
Large object heap starts at 0x14161000
segment begin allocated size
14160000 14161000 14161010 0x10(16)
Heap Size: Size: 0x1dd1ee8 (31268584) bytes.
------------------------------
Heap 3 (01ce05d8)
generation 0 starts at 0x34fbe540
generation 1 starts at 0x34d11000
generation 2 starts at 0x08161000
ephemeral segment allocation context: none
segment begin allocated size
08160000 08161000 08c26248 0xac5248(11293256)
2bc40000 2bc41000 2bfc9294 0x388294(3703444)
34d10000 34d11000 34fcdbb0 0x2bcbb0(2870192)
Large object heap starts at 0x15161000
segment begin allocated size
15160000 15161000 151c3b10 0x62b10(404240)
Heap Size: Size: 0x116cb9c (18271132) bytes.
------------------------------
Heap 4 (01ce55f0)
generation 0 starts at 0x934a7cec
generation 1 starts at 0x93011000
generation 2 starts at 0x0a161000
ephemeral segment allocation context: none
segment begin allocated size
0a160000 0a161000 0b4bf898 0x135e898(20310168)
45f50000 45f51000 45f70df4 0x1fdf4(130548)
93010000 93011000 934b83ec 0x4a73ec(4879340)
Large object heap starts at 0x16161000
segment begin allocated size
16160000 16161000 161ab050 0x4a050(303184)
Heap Size: Size: 0x186fac8 (25623240) bytes.
------------------------------
Heap 5 (01cec608)
generation 0 starts at 0x31f1d4d0
generation 1 starts at 0x31c41000
generation 2 starts at 0x0c161000
ephemeral segment allocation context: none
segment begin allocated size
0c160000 0c161000 0d2120b0 0x10b10b0(17502384)
7aea0000 7aea1000 7af5c074 0xbb074(766068)
31c40000 31c41000 31f2caac 0x2ebaac(3062444)
Large object heap starts at 0x17161000
segment begin allocated size
17160000 17161000 17179ad0 0x18ad0(101072)
Heap Size: Size: 0x14706a0 (21431968) bytes.
------------------------------
Heap 6 (01cf5a20)
generation 0 starts at 0x5fbb5598
generation 1 starts at 0x5f821000
generation 2 starts at 0x0e161000
ephemeral segment allocation context: none
segment begin allocated size
0e160000 0e161000 0eb22284 0x9c1284(10228356)
5f820000 5f821000 5fbc9838 0x3a8838(3835960)
Large object heap starts at 0x18161000
segment begin allocated size
18160000 18161000 18161010 0x10(16)
Heap Size: Size: 0xd69acc (14064332) bytes.
------------------------------
Heap 7 (01cfaa38)
generation 0 starts at 0xad530e00
generation 1 starts at 0xad011000
generation 2 starts at 0x10161000
ephemeral segment allocation context: none
segment begin allocated size
10160000 10161000 109bd218 0x85c218(8765976)
a4010000 a4011000 a42f418c 0x2e318c(3027340)
41f50000 41f51000 42a99090 0xb48090(11829392)
ad010000 ad011000 ad5399a0 0x5289a0(5409184)
Large object heap starts at 0x19161000
segment begin allocated size
19160000 19161000 1980ddb0 0x6acdb0(6999472)
Heap Size: Size: 0x225cb84 (36031364) bytes.
------------------------------
GC Heap Size: Size: 0xf950f98 (261427096) bytes.
0:000> !heap -s
SEGMENT HEAP ERROR: failed to initialize the extention
LFH Key : 0x3a251113
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-----------------------------------------------------------------------------
Virtual block: 1a830000 - 1a830000 (size 00000000)
Virtual block: 1a9d0000 - 1a9d0000 (size 00000000)
007b0000 00000002 16384 15520 16384 198 418 5 2 3 LFH
00280000 00001002 3136 1608 3136 7 23 3 0 0 LFH
00a40000 00001002 1088 828 1088 79 25 2 0 0 LFH
00c30000 00001002 1088 308 1088 13 5 2 0 0 LFH
00100000 00041002 256 4 256 2 1 1 0 0
01560000 00001002 256 24 256 3 4 1 0 0
01510000 00001002 64 4 64 2 1 1 0 0
01670000 00001002 64 4 64 2 1 1 0 0
00290000 00011002 256 4 256 1 2 1 0 0
01870000 00001002 256 8 256 3 3 1 0 0
004e0000 00001002 256 4 256 1 2 1 0 0
019e0000 00001002 256 4 256 2 1 1 0 0
01af0000 00001002 256 4 256 2 1 1 0 0
02080000 00001002 1280 396 1280 1 34 2 0 0 LFH
01fc0000 00041002 256 4 256 2 1 1 0 0
1b700000 00041002 256 256 256 6 4 1 0 0 LFH
21840000 00001002 64192 1880 64192 1639 13 12 0 0 LFH
External fragmentation 87 % (13 free blocks)
-----------------------------------------------------------------------------
If we recycle w3wp, all the memory goes back to the system, so there doesn't seem to be a memory leak. I understand that MEM_RESERVE regions are reserved virtual address space that has not been committed. The question is: is there a way to figure out what is using up all of this address space?
I have the memory dump, but I don't have access to the server to collect performance counters or anything similar.
Thanks folks!!
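One dump-only approach is to cross-reference the large reserved regions against the GC segments. Note that the largest <unknown> region above (base c2010000, ~64 MB) is exactly the LOH segment of Heap 0 in !eeheap -gc, and with 8 server-GC heaps each reserving segments up front, GC reservations could plausibly account for much of the 2.9 GB. A sketch of commands to check this (the -f: filter is from the documented !address extension; the address is just an example taken from the output above):

0:000> !address -f:VAR
(lists the VirtualAlloc'd regions that !address groups under <unknown>)
0:000> !address c2010000
(shows the state, protection, and usage of one large region)

Any reserved range whose base matches a segment in !eeheap -gc belongs to the GC; large reserved ranges that match nothing managed point to unmanaged VirtualAlloc callers.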

What is the difference between swap and virtual memory in Linux?

Why am I not seeing any swap usage in the scenario below, even though virtual memory shows 20 GB?
Tasks: 186 total, 1 running, 185 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.2%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 32880880k total, 17555744k used, 15325136k free, 300268k buffers
Swap: 16383996k total, 0k used, 16383996k free, 2287700k cached
  PID USER     PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10314 abcuser  20   0 20.3g 6.8g  25m S  0.7 21.8 23:47.38  java
25119 abcuser  20   0 20.3g 6.7g  25m S  0.3 21.4 23:38.25  java

                    total  used  free  shared  buffers  cached
Mem:                   31    16    14       0        0       2
-/+ buffers/cache:           14    17
Swap:                  15     0    15
Total:                 46    16    30
Regards
Sanjeev
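VIRT counts every mapping a process has made, committed or not, while swap is only used when resident pages get evicted under memory pressure, and with ~15 GB free there is none. You can see the split directly in the kernel's per-process accounting; a sketch using the java PID from the top output above (VmSwap requires kernel 2.6.34 or newer):

# virtual size vs resident set vs swapped-out size for PID 10314
grep -E 'VmSize|VmRSS|VmSwap' /proc/10314/status

VmSize should be near 20 GB while VmRSS stays around 6.8 GB and VmSwap at 0, matching top: the remaining address space (the JVM's pre-reserved heap, thread stacks, mapped files) occupies neither RAM nor swap.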

CEPH raw space usage

I can't understand where my Ceph raw space has gone.
cluster 90dc9682-8f2c-4c8e-a589-13898965b974
 health HEALTH_WARN 72 pgs backfill; 26 pgs backfill_toofull; 51 pgs backfilling; 141 pgs stuck unclean; 5 requests are blocked > 32 sec; recovery 450170/8427917 objects degraded (5.341%); 5 near full osd(s)
 monmap e17: 3 mons at {enc18=192.168.100.40:6789/0,enc24=192.168.100.43:6789/0,enc26=192.168.100.44:6789/0}, election epoch 734, quorum 0,1,2 enc18,enc24,enc26
 osdmap e3326: 14 osds: 14 up, 14 in
 pgmap v5461448: 1152 pgs, 3 pools, 15252 GB data, 3831 kobjects
       31109 GB used, 7974 GB / 39084 GB avail
       450170/8427917 objects degraded (5.341%)
         18 active+remapped+backfill_toofull
       1011 active+clean
         64 active+remapped+wait_backfill
          8 active+remapped+wait_backfill+backfill_toofull
         51 active+remapped+backfilling
recovery io 58806 kB/s, 14 objects/s
OSD tree (each host has 2 OSD):
# id weight type name up/down reweight
-1 36.45 root default
-2 5.44 host enc26
0 2.72 osd.0 up 1
1 2.72 osd.1 up 0.8227
-3 3.71 host enc24
2 0.99 osd.2 up 1
3 2.72 osd.3 up 1
-4 5.46 host enc22
4 2.73 osd.4 up 0.8
5 2.73 osd.5 up 1
-5 5.46 host enc18
6 2.73 osd.6 up 1
7 2.73 osd.7 up 1
-6 5.46 host enc20
9 2.73 osd.9 up 0.8
8 2.73 osd.8 up 1
-7 0 host enc28
-8 5.46 host archives
12 2.73 osd.12 up 1
13 2.73 osd.13 up 1
-9 5.46 host enc27
10 2.73 osd.10 up 1
11 2.73 osd.11 up 1
Real usage:
/dev/rbd0 14T 7.9T 5.5T 59% /mnt/ceph
Pool size:
osd pool default size = 2
Pools:
ceph osd lspools
0 data,1 metadata,2 rbd,
rados df
pool name    category  KB           objects  clones  degraded  unfound  rd     rd KB    wr       wr KB
data         -         0            0        0       0         0        0      0        0        0
metadata     -         0            0        0       0         0        0      0        0        0
rbd          -         15993591918  3923880  0       444545    0        82936  1373339  2711424  849398218
total used             32631712348  3923880
total avail            8351008324
total space            40982720672
Raw usage is 4× the real usage (31109 GB raw used vs the 7.9T shown by df). As I understand it, it should be 2×?
Yes, it should be 2×. I'm not really sure that the real raw usage is 7.9T. Why do you check this value on the mapped disk?
These are my pools:
pool name     KB           objects  clones  degraded  unfound  rd          rd KB         wr           wr KB
admin-pack    7689982      1955     0       0         0        693841      3231750       40068930     353462603
public-cloud  105432663    26561    0       0         0        13001298    638035025     222540884    3740413431
rbdkvm_sata   32624026697  7968550  31783   0         0        4950258575  232374308589  12772302818  278106113879
total used    98289353680  7997066
total avail   34474223648
total space   132763577328
You can see that the total used space is about 3 times the used space of the rbdkvm_sata pool (give or take).
ceph -s shows the same result:
pgmap v11303091: 5376 pgs, 3 pools, 31220 GB data, 7809 kobjects
93736 GB used, 32876 GB / 123 TB avail
I don't think you have just one rbd image. The result of "ceph osd lspools" indicated that you had 3 pools and one of pools had name "metadata".(Maybe you were using cephfs). /dev/rbd0 was appeared because you mapped the image but you could have other images also. To list the images you can use "rbd list -p ". You can see the image info with "rbd info -p "