Solaris ZFS pool import crash

I run my NAS on Solaris 11.3. I had a system crash a few days ago; the reason is unknown. Afterwards I was no longer able to import the pool. I have already exported the pool so that I can boot without crashing again. The pool is not the rpool, but I need the data on it; otherwise 20 years of my children growing up are gone. I checked the backup, but it stopped a few months ago due to space limits.
Current status: zpool import shows a healthy pool:
root@solaris:~# zpool import
   pool: mediapool1
     id: 8470162457149274931
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and
         the '-f' flag.
    see: http://support.oracle.com/msg/ZFS-8000-EY
 config:

        mediapool1                 ONLINE
          raidz1-0                 ONLINE
            c0t5000C500A2C3F5D9d0  ONLINE
            c0t5000C500A2C40ECEd0  ONLINE
            c0t5000C500A2C4122Bd0  ONLINE
But as soon as I actually run zpool import -f mediapool1, I get a kernel panic.
I have since replaced the entire system: SAS controller, CPU, mainboard, RAM, OS (now Solaris 11.4), power supply. Nothing changed.
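If it helps, the panic string and stack should be recoverable from the crash dump Solaris writes on panic (a sketch, assuming savecore is enabled and the compressed dump landed in the default /var/crash location):

root@solaris:~# cd /var/crash
root@solaris:/var/crash# savecore -vf vmdump.0    # expand compressed dump to unix.0/vmcore.0
root@solaris:/var/crash# mdb unix.0 vmcore.0
> ::status      # prints the panic message
> ::stack       # stack of the panicking thread
> ::msgbuf      # console messages leading up to the panic

Knowing which ZFS function panics would at least hint at whether a metadata rewind is likely to help.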
Output of zdb -e -dddd:
root@solaris:~# zdb -e -dddd mediapool1
Dataset mediapool1 [ZPL], ID 18, cr_txg 1, 224K, 10 objects, rootbp DVA[0]=<0:903a3a5c000:4000:RZM:3> [L0 DMU objset] fletcher4 lzjb LE unique unencrypted size=800L/200P birth=10836736L/10836736P fill=10 contiguous 3-copy cksum=1416247ffb:6ec9c299cf2:1423227024d05:28fa8c22fe2105
Deadlist: 0 (0/0 comp)
mintxg 0 -> obj 21
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 74.5K 16K 31.25 DMU dnode
dnode flags: USED_BYTES
dnode maxblkid: 0
Object lvl iblk dblk dsize lsize %full type
-1 1 16K 512 10.5K 512 100.00 ZFS user/group used
dnode flags: USED_BYTES
dnode maxblkid: 0
microzap: 512 bytes, 1 entries
0 = 0xdf20
Object lvl iblk dblk dsize lsize %full type
-2 1 16K 512 10.5K 512 100.00 ZFS user/group used
dnode flags: USED_BYTES
dnode maxblkid: 0
microzap: 512 bytes, 1 entries
0 = 0xdf20
Object lvl iblk dblk dsize lsize %full type
1 1 16K 512 10.5K 512 100.00 ZFS master node
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 512 bytes, 7 entries
ROOT = 0x4
SA_ATTRS = 0x2
casesensitivity = 0x2
VERSION = 0x6
DELETE_QUEUE = 0x3
SHARES = 0x7
normalization = 0
Object lvl iblk dblk dsize lsize %full type
2 1 16K 512 10.5K 512 100.00 SA master node
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 512 bytes, 2 entries
LAYOUTS = 0x6
REGISTRY = 0x5
Object lvl iblk dblk dsize lsize %full type
3 1 16K 512 10.5K 512 100.00 ZFS delete queue
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 512 bytes, 0 entries
Object lvl iblk dblk dsize lsize %full type
4 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /
uid 0
gid 0
atime Tue Jun 20 09:39:20 2017
mtime Sat Jun 17 14:02:19 2017
ctime Sat Jun 17 14:02:19 2017
crtime Wed Jun 14 08:43:52 2017
gen 4
mode 040755
size 5
parent 4
links 5
pflags 0x40800000344
microzap: 512 bytes, 3 entries
backup = 11 (type: Directory)
fs_userhome = 10 (type: Directory)
export = 12 (type: Directory)
Object lvl iblk dblk dsize lsize %full type
5 1 16K 1.50K 10.5K 1.50K 100.00 SA attr registration
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 1536 bytes, 21 entries
ZPL_ZNODE_ACL = 0x5803000f : [88:3:15]
ZPL_RDEV = 0x800000a : [8:0:10]
ZPL_LINKS = 0x8000008 : [8:0:8]
ZPL_ATIME = 0x10000000 : [16:0:0]
ZPL_GID = 0x800000d : [8:0:13]
ZPL_PAD = 0x2000000e : [32:0:14]
ZPL_SCANSTAMP = 0x20030012 : [32:3:18]
ZPL_SIZE = 0x8000006 : [8:0:6]
ZPL_UID = 0x800000c : [8:0:12]
ZPL_CTIME = 0x10000002 : [16:0:2]
ZPL_MAC_LABEL = 0x30014 : [0:3:20]
ZPL_MTIME = 0x10000001 : [16:0:1]
ZPL_MODE = 0x8000005 : [8:0:5]
ZPL_FLAGS = 0x800000b : [8:0:11]
ZPL_PARENT = 0x8000007 : [8:0:7]
ZPL_DACL_ACES = 0x40013 : [0:4:19]
ZPL_CRTIME = 0x10000003 : [16:0:3]
ZPL_SYMLINK = 0x30011 : [0:3:17]
ZPL_GEN = 0x8000004 : [8:0:4]
ZPL_DACL_COUNT = 0x8000010 : [8:0:16]
ZPL_XATTR = 0x8000009 : [8:0:9]
Object lvl iblk dblk dsize lsize %full type
6 1 16K 16K 21.5K 32K 100.00 SA attr layouts
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 1
Fat ZAP stats:
Pointer table:
1024 elements
zt_blk: 0
zt_numblks: 0
zt_shift: 10
zt_blks_copied: 0
zt_nextblk: 0
ZAP entries: 1
Leaf blocks: 1
Total blocks: 2
zap_block_type: 0x8000000000000001
zap_magic: 0x2f52ab2ab
zap_salt: 0x32d772cd
Leafs with 2^n pointers:
9: 1 *
Blocks with n*5 entries:
0: 1 *
Blocks n/10 full:
1: 1 *
Entries with n chunks:
4: 1 *
Buckets with n entries:
0: 511 ****************************************
1: 1 *
2 = [ 5 6 4 12 13 7 11 0 1 2 3 8 16 19 ]
Object lvl iblk dblk dsize lsize %full type
7 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /.zfs/shares
uid 0
gid 0
atime Wed Jun 14 08:43:53 2017
mtime Wed Jun 14 08:43:52 2017
ctime Wed Jun 14 08:43:52 2017
crtime Wed Jun 14 08:43:52 2017
gen 4
mode 040555
size 2
parent 7
links 2
pflags 0x40800000344
microzap: 512 bytes, 0 entries
Object lvl iblk dblk dsize lsize %full type
10 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /fs_userhome
uid 0
gid 0
atime Fri Dec 21 06:21:30 2018
mtime Fri Jun 16 13:07:27 2017
ctime Fri Jun 16 13:07:27 2017
crtime Fri Jun 16 13:07:27 2017
gen 20775
mode 040755
size 2
parent 4
links 2
pflags 0x40800000344
microzap: 512 bytes, 0 entries
Object lvl iblk dblk dsize lsize %full type
11 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /backup
uid 0
gid 0
atime Fri Dec 21 06:21:30 2018
mtime Sat Jun 17 13:47:04 2017
ctime Sat Jun 17 13:47:04 2017
crtime Sat Jun 17 13:47:04 2017
gen 39220
mode 040755
size 2
parent 4
links 2
pflags 0x40800000344
microzap: 512 bytes, 0 entries
Object lvl iblk dblk dsize lsize %full type
12 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /export
uid 0
gid 0
atime Tue Jan 29 06:46:27 2019
mtime Sat Jun 17 14:02:19 2017
ctime Sat Jun 17 14:02:19 2017
crtime Sat Jun 17 14:02:19 2017
gen 39413
mode 040755
size 2
parent 4
links 2
pflags 0x40800000344
microzap: 512 bytes, 0 entries
root@solaris:~# zdb -e -dddd mediapool1/export
Dataset mediapool1/export [ZPL], ID 879, cr_txg 37449, 202K, 8 objects, rootbp DVA[0]=<0:903a39cc000:4000:RZM:3> [L0 DMU objset] fletcher4 lzjb LE unique unencrypted size=800L/200P birth=10836735L/10836735P fill=8 contiguous 3-copy cksum=1695577b35:7cf5a5fbca5:16bf5de66c79d:2e4ab1d9943912
Deadlist: 107K (9.00K/9.00K comp)
mintxg 0 -> obj 217
mintxg 1 -> obj 218
mintxg 37455 -> obj 219
Object lvl iblk dblk dsize lsize %full type
0 7 16K 16K 74.5K 16K 25.00 DMU dnode
dnode flags: USED_BYTES
dnode maxblkid: 0
Object lvl iblk dblk dsize lsize %full type
-1 1 16K 512 10.5K 512 100.00 ZFS user/group used
dnode flags: USED_BYTES
dnode maxblkid: 0
microzap: 512 bytes, 1 entries
0 = 0x85e0
Object lvl iblk dblk dsize lsize %full type
-2 1 16K 512 10.5K 512 100.00 ZFS user/group used
dnode flags: USED_BYTES
dnode maxblkid: 0
microzap: 512 bytes, 1 entries
0 = 0x85e0
Object lvl iblk dblk dsize lsize %full type
1 1 16K 1K 10.5K 1K 100.00 ZFS master node
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 1024 bytes, 8 entries
SA_ATTRS = 0x2
DELETE_QUEUE = 0x3
SHARES = 0x7
casesensitivity = 0x2
normalization = 0
VERSION = 0x5
utf8only = 0
ROOT = 0x4
Object lvl iblk dblk dsize lsize %full type
2 1 16K 512 10.5K 512 100.00 SA master node
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 512 bytes, 2 entries
REGISTRY = 0x5
LAYOUTS = 0x6
Object lvl iblk dblk dsize lsize %full type
3 1 16K 512 10.5K 512 100.00 ZFS delete queue
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 512 bytes, 0 entries
Object lvl iblk dblk dsize lsize %full type
4 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /
uid 0
gid 0
atime Tue Jan 29 06:46:27 2019
mtime Wed May 20 16:50:07 2015
ctime Wed May 20 16:50:07 2015
crtime Wed Mar 13 21:27:28 2013
gen 65199
mode 040755
size 3
parent 4
links 3
pflags 0x40800000344
microzap: 512 bytes, 1 entries
home = 8 (type: Directory)
Object lvl iblk dblk dsize lsize %full type
5 1 16K 1.50K 10.5K 1.50K 100.00 SA attr registration
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
microzap: 1536 bytes, 21 entries
ZPL_GID = 0x800000d : [8:0:13]
ZPL_DACL_ACES = 0x40013 : [0:4:19]
ZPL_CRTIME = 0x10000003 : [16:0:3]
ZPL_MAC_LABEL = 0x30014 : [0:3:20]
ZPL_ATIME = 0x10000000 : [16:0:0]
ZPL_SIZE = 0x8000006 : [8:0:6]
ZPL_LINKS = 0x8000008 : [8:0:8]
ZPL_PAD = 0x2000000e : [32:0:14]
ZPL_PARENT = 0x8000007 : [8:0:7]
ZPL_MODE = 0x8000005 : [8:0:5]
ZPL_DACL_COUNT = 0x8000010 : [8:0:16]
ZPL_SYMLINK = 0x30011 : [0:3:17]
ZPL_XATTR = 0x8000009 : [8:0:9]
ZPL_SCANSTAMP = 0x20030012 : [32:3:18]
ZPL_UID = 0x800000c : [8:0:12]
ZPL_GEN = 0x8000004 : [8:0:4]
ZPL_RDEV = 0x800000a : [8:0:10]
ZPL_FLAGS = 0x800000b : [8:0:11]
ZPL_ZNODE_ACL = 0x5803000f : [88:3:15]
ZPL_CTIME = 0x10000002 : [16:0:2]
ZPL_MTIME = 0x10000001 : [16:0:1]
Object lvl iblk dblk dsize lsize %full type
6 1 16K 16K 21.5K 32K 100.00 SA attr layouts
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 1
Fat ZAP stats:
Pointer table:
1024 elements
zt_blk: 0
zt_numblks: 0
zt_shift: 10
zt_blks_copied: 0
zt_nextblk: 0
ZAP entries: 1
Leaf blocks: 1
Total blocks: 2
zap_block_type: 0x8000000000000001
zap_magic: 0x2f52ab2ab
zap_salt: 0x751fd0cd
Leafs with 2^n pointers:
9: 1 *
Blocks with n*5 entries:
0: 1 *
Blocks n/10 full:
1: 1 *
Entries with n chunks:
4: 1 *
Buckets with n entries:
0: 511 ****************************************
1: 1 *
2 = [ 5 6 4 12 13 7 11 0 1 2 3 8 16 19 ]
Object lvl iblk dblk dsize lsize %full type
7 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /.zfs/shares
uid 0
gid 0
atime Wed Mar 4 06:30:06 2015
mtime Wed Mar 13 21:27:28 2013
ctime Wed Mar 13 21:27:28 2013
crtime Wed Mar 13 21:27:28 2013
gen 65199
mode 040555
size 2
parent 7
links 2
pflags 0x40800000344
microzap: 512 bytes, 0 entries
Object lvl iblk dblk dsize lsize %full type
8 1 16K 512 10.5K 512 100.00 ZFS directory
168 bonus System attributes
dnode flags: USED_BYTES USERUSED_ACCOUNTED
dnode maxblkid: 0
path /home
uid 0
gid 0
atime Tue Jan 29 06:46:27 2019
mtime Wed Mar 13 21:27:32 2013
ctime Wed Mar 13 21:27:32 2013
crtime Wed Mar 13 21:27:32 2013
gen 65206
mode 040755
size 2
parent 4
links 2
pflags 0x40800000344
microzap: 512 bytes, 0 entries
root@solaris:~#
From my understanding, ZFS should still have some backup data (older, redundant on-disk state) available.
How can I import the pool using that backup data?
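What I plan to try next, based on the standard zpool import recovery options (readonly=on and -F/-n), and ideally against dd images of the three disks rather than the originals, since a failed rewind can make things worse:

# 1) read-only import: nothing is written or replayed, which often
#    sidesteps the code path that panics on a normal import
zpool import -f -o readonly=on mediapool1

# 2) dry run of a rewind: reports whether discarding the last few
#    transactions would yield an importable pool, without changing anything
zpool import -f -F -n mediapool1

# 3) actual rewind import (discards the most recent transactions)
zpool import -f -F mediapool1

If the read-only import succeeds, the first step would be to copy everything off with zfs send or plain file copies before trying anything else.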

Related


Why is my Ceph cluster's 'raw used' value (964G) in the global section far higher than the 'used' value (244G) in the pools section?
[en@ceph01 ~]$ sudo ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
6.00TiB 5.06TiB 964GiB 15.68
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
.rgw.root 1 1.09KiB 0 1.56TiB 4
default.rgw.control 2 0B 0 1.56TiB 8
default.rgw.meta 3 0B 0 1.56TiB 0
default.rgw.log 4 0B 0 1.56TiB 207
cephfs_data 5 244GiB 9.22 2.34TiB 4829661
cephfs_meta 6 168MiB 0 2.34TiB 4160
[en@ceph01 ~]$ sudo ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS
0 hdd 2.00000 1.00000 2.00TiB 331GiB 326GiB 1.64GiB 3.38GiB 1.68TiB 16.17 1.03 77
1 hdd 2.00000 1.00000 2.00TiB 346GiB 341GiB 1.69GiB 3.51GiB 1.66TiB 16.90 1.08 78
2 hdd 2.00000 1.00000 2.00TiB 286GiB 282GiB 1.31GiB 2.96GiB 1.72TiB 13.97 0.89 69
TOTAL 6.00TiB 964GiB 949GiB 4.64GiB 9.86GiB 5.06TiB 15.68
MIN/MAX VAR: 0.89/1.08 STDDEV: 1.24
info about ceph cluster:
>pool 5 'cephfs_data' replicated size 2 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 33 flags hashpspool stripe_width 0 application cephfs
>pool 6 'cephfs_meta' replicated size 2 min_size 2 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 last_change 31 flags hashpspool stripe_width 0 application cephfs
> max_osd 3
This is most likely due to bluestore_min_alloc_size_hdd being set to 64K, which pads every small object up to 64KiB on disk.
More info here: ceph df (octopus) shows USED is 7 times higher than STORED in erasure coded pool
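To verify, you can ask a running OSD for its effective value (this assumes access to the OSD's admin socket on the host where it runs; osd.0 is just one of the three OSDs above):

# effective allocation unit for HDD-backed BlueStore OSDs
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd

Rough sanity check: cephfs_data stores 244GiB in ~4.83M objects, about 53KiB per object on average, so with a 64KiB allocation unit nearly every object is padded to 64KiB on each of the 2 replicas. That is 2 × 4,829,661 × 64KiB ≈ 590GiB of allocations for that pool alone, which goes a long way towards explaining the 964GiB raw figure.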

How to remove all the rows from a matrix that match values in another vector?

I am building an exclude vector so that any row of the matrix user whose second-column value appears in exclude is removed. How do I do that efficiently, without using a for loop over user for each item in exclude?
My code below does not work:
count = 0;
% Just showing how I am constructing `exclude`, to show that it can be long.
% So, manually removing each item from `exclude` is not an option.
% And using a for loop to iterate through each element in `exclude` can be inefficient.
for b = 1:size(user_cat,1)
    if user_cat(b,4) == 0
        count = count + 1;
        exclude(count,1) = user_cat(b,1);
    end
end
% This is the important line of focus. You can ignore the previous parts.
user = user(user(:,2)~=exclude(:),:);
The last line gives the following error:
Error using ~=
Matrix dimensions must agree.
So, I am having to use this instead:
for b = 1:size(exclude,1)
    user = user(user(:,2)~=exclude(b,1),:);
end
Example:
user=[1433100000.00000 26 620260 7 1433100000000.00 0 0 2 1 100880 290 23
1433100000.00000 26 620260 7 1433100000000.00 0 0 2 1 100880 290 23
1433100000.00000 25 620160 7 1433100000000.00 0 0 2 1 100880 7274 22
1433100000.00000 21 619910 7 1433100000000.00 24.1190000000000 120.670000000000 2 0 100880 53871 21
1433100000.00000 19 620040 7 1433100000000.00 24.1190000000000 120.670000000000 2 0 100880 22466 21
1433100000.00000 28 619030 7 1433100000000.00 24.6200000000000 120.810000000000 2 0 100880 179960 16
1433100000.00000 28 619630 7 1433100000000.00 24.6200000000000 120.810000000000 2 0 100880 88510 16
1433100000.00000 28 619790 7 1433100000000.00 24.6200000000000 120.810000000000 2 0 100880 12696 16
1433100000.00000 7 36582000 7 1433100000000.00 0 0 2 0 100880 33677 14
1433000000.00000 24 620010 7 1433000000000.00 0 0 2 1 100880 3465 14
1433000000.00000 4 36581000 7 1433000000000.00 0 0 2 0 100880 27809 12
1433000000.00000 20 619960 7 1433000000000.00 0 0 2 1 100880 860 11
1433000000.00000 30 619760 7 1433000000000.00 25.0060000000000 121.510000000000 2 0 100880 34706 10
1433000000.00000 33 619910 7 1433000000000.00 0 0 2 0 100880 15060 9
1433000000.00000 26 619740 6 1433000000000.00 0 0 2 0 100880 52514 8
1433000000.00000 18 619900 6 1433000000000.00 0 0 2 0 100880 21696 8
1433000000.00000 16 619850 6 1433000000000.00 24.9910000000000 121.470000000000 2 0 100880 10505 1
1433000000.00000 16 619880 6 1433000000000.00 24.9910000000000 121.470000000000 2 0 100880 1153 1
1433000000.00000 28 619120 6 1433000000000.00 0 0 2 0 100880 103980 24
1433000000.00000 21 619870 6 1433000000000.00 0 0 2 0 100880 1442 24];
exclude=[ 3
4
7
10
17
18
19
28
30
33 ];
Desired output:
1433100000.00000 26 620260 7 1433100000000.00 0 0 2 1 100880 290 23
1433100000.00000 26 620260 7 1433100000000.00 0 0 2 1 100880 290 23
1433100000.00000 25 620160 7 1433100000000.00 0 0 2 1 100880 7274 22
1433100000.00000 21 619910 7 1433100000000.00 24.1190000000000 120.670000000000 2 0 100880 53871 21
1433000000.00000 24 620010 7 1433000000000.00 0 0 2 1 100880 3465 14
1433000000.00000 20 619960 7 1433000000000.00 0 0 2 1 100880 860 11
1433000000.00000 26 619740 6 1433000000000.00 0 0 2 0 100880 52514 8
1433000000.00000 16 619850 6 1433000000000.00 24.9910000000000 121.470000000000 2 0 100880 10505 1
1433000000.00000 16 619880 6 1433000000000.00 24.9910000000000 121.470000000000 2 0 100880 1153 1
1433000000.00000 21 619870 6 1433000000000.00 0 0 2 0 100880 1442 24
Use ismember to get a logical index of the rows of user whose second-column value appears in exclude; these are the rows to be removed. Negate that logical index to select the rows to keep, and use logical indexing to keep them.
user = user(~ismember(user(:,2),exclude),:);
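The construction of exclude itself can be vectorized the same way, replacing the counting loop in the question (a sketch using the same user_cat variable):

% logical indexing replaces the counting loop: take column 1 of every
% row of user_cat whose 4th column is zero
exclude = user_cat(user_cat(:,4) == 0, 1);
% then drop the matching rows as above
user = user(~ismember(user(:,2), exclude), :);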

Why does !heap -s <heap> not work the way it's intended?

My WinDBG version is 10.0.10240.9 AMD64. While casually debugging a native memory dump, I realized that my !heap command behaves differently than described, and I am unable to figure out why.
There are plenty of resources mentioning !heap -s:
https://msdn.microsoft.com/en-us/library/windows/hardware/ff563189%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
http://windbg.info/doc/1-common-cmds.html
When I execute !heap -s
I get this truncated list:
0:000> !heap -s
************************************************************************************************************************
NT HEAP STATS BELOW
************************************************************************************************************************
LFH Key : 0x000000c42ceaf6ca
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
Virtual block: 0000000003d40000 - 0000000003d40000 (size 0000000000000000)
... many more virtual blocks
0000000000b90000 00000002 3237576 3220948 3237576 20007 1749 204 359 0 LFH
0000000000010000 00008000 64 8 64 5 1 1 0 0
... more heaps
-------------------------------------------------------------------------------------
OK, fine: b90000 looks big, but contrary to the docs above and the output of !heap -s -?, I cannot get information for this single heap. Each of the following commands produces the exact same output as shown above (as if I had not specified anything after -s):
!heap -s b90000
!heap -s -h b90000
!heap -s 1
I get a load of virtual blocks and a dump of all heaps instead of the single specified one.
Anyone having the same issue?
My "Windows Debugger Version 10.0.10586.567 AMD64" behaved like yours, but
“Microsoft (R) Windows Debugger Version 6.3.9600.16384 AMD64” I have in in:
C:\Program Files\Windows Kits\8.1\Debuggers\x64
0:000> !heap -s -h 0000000000220000
Walking the heap 0000000000220000 ..................Virtual block: 0000000015f20000 - 0000000015f20000 (size 0000000000000000)
Virtual block: 000000001b2e0000 - 000000001b2e0000 (size 0000000000000000)
Virtual block: 000000001f1e0000 - 000000001f1e0000 (size 0000000000000000)
Virtual block: 0000000023c10000 - 0000000023c10000 (size 0000000000000000)
Virtual block: 000000001c060000 - 000000001c060000 (size 0000000000000000)
Virtual block: 000000001ddc0000 - 000000001ddc0000 (size 0000000000000000)
0: Heap 0000000000220000
Flags 00000002 - HEAP_GROWABLE
Reserved memory in segments 226880 (k)
Commited memory in segments 218204 (k)
Virtual bytes (correction for large UCR) 218740 (k)
Free space 12633 (k) (268 blocks)
External fragmentation 5% (268 free blocks)
Virtual address fragmentation 0% (30 uncommited ranges)
Virtual blocks 6 - total 0 KBytes
Lock contention 0
Segments 1
Low fragmentation heap 00000000002291e0
Lock contention 0
Metadata usage 90112 bytes
Statistics:
Segments created 993977
Segments deleted 992639
Segments reused 0
Block cache:
3: 1024 bytes ( 17, 0)
4: 2048 bytes ( 42, 0)
5: 4096 bytes ( 114, 0)
6: 8192 bytes ( 231, 2)
7: 16384 bytes ( 129, 9)
8: 32768 bytes ( 128, 11)
9: 65536 bytes ( 265, 58)
10: 131072 bytes ( 357, 8)
11: 262144 bytes ( 192, 49)
Buckets info:
Size Blocks Seg Empty Aff Distribution
------------------------------------------------
------------------------------------------------
Default heap Front heap Unused bytes
Range (bytes) Busy Free Busy Free Total Average
------------------------------------------------------------------
0 - 1024 577 140 1035286 11608 10563036 10
1024 - 2048 173 3 586 374 27779 36
2048 - 3072 17 19 47 224 1605 25
3072 - 4096 20 12 1 126 348 16
4096 - 5120 35 3 1 30 677 18
5120 - 6144 2 8 0 0 33 16
6144 - 7168 5 9 0 0 56 11
7168 - 8192 0 11 0 0 0 0
8192 - 9216 14 0 0 15 236 16
9216 - 10240 1 0 0 0 8 8
12288 - 13312 1 0 0 0 17 17
14336 - 15360 1 0 0 18 1 1
15360 - 16384 1 0 0 0 32 32
16384 - 17408 10 0 0 0 160 16
22528 - 23552 1 0 0 0 9 9
23552 - 24576 2 0 0 0 32 16
27648 - 28672 1 0 0 0 8 8
30720 - 31744 0 1 0 0 0 0
32768 - 33792 18 0 0 0 250 13
33792 - 34816 0 1 0 0 0 0
39936 - 40960 0 2 0 0 0 0
40960 - 41984 0 1 0 0 0 0
43008 - 44032 0 2 0 0 0 0
44032 - 45056 0 5 0 0 0 0
45056 - 46080 0 1 0 0 0 0
46080 - 47104 0 2 0 0 0 0
47104 - 48128 0 1 0 0 0 0
49152 - 50176 0 3 0 0 0 0
50176 - 51200 1 0 0 0 16 16
51200 - 52224 0 4 0 0 0 0
57344 - 58368 0 1 0 0 0 0
58368 - 59392 0 1 0 0 0 0
62464 - 63488 0 1 0 0 0 0
63488 - 64512 200 1 0 0 3200 16
64512 - 65536 0 1 0 0 0 0
65536 - 66560 1029 2 0 0 10624 10
79872 - 80896 100 0 0 0 900 9
131072 - 132096 9 0 0 0 144 16
193536 - 194560 1 0 0 0 9 9
224256 - 225280 1 0 0 0 16 16
262144 - 263168 49 27 0 0 784 16
327680 - 328704 1 0 0 0 17 17
384000 - 385024 0 1 0 0 0 0
523264 - 524288 1 5 0 0 23 23
------------------------------------------------------------------
Total 2271 268 1035921 12395 10610020 10
This might be a workaround; I can't answer why the Win 10 version doesn't work :-(
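If this comes up often, a stopgap is to keep the 8.1-kit debugger installed next to the Windows 10 one and open the same dump there (the dump path below is a placeholder):

"C:\Program Files\Windows Kits\8.1\Debuggers\x64\windbg.exe" -z C:\dumps\mydump.dmp

and then run !heap -s -h <heap base> from that session, as shown above.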

How to remove blank lines from a text file using PowerShell

I am building a parser in PowerShell to convert vmstat log dumps to CSV files as input to a graphing framework (Rickshaw). The file has repeating 'headers' which I would like to remove. A data sample is below:
Tue Sep 1 14:03:26 2015: procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
Tue Sep 1 14:03:26 2015: r b swpd free buff cache si so bi bo in cs us sy id wa st
Tue Sep 1 14:03:26 2015: 0 1 224412 358316 248772 63286912 0 0 388 267 1 1 8 0 91 1 0
Tue Sep 1 14:03:36 2015: 0 0 224412 357572 248796 63286916 0 0 0 8 220 261 0 0 100 0 0
Tue Sep 1 14:03:46 2015: 0 0 224412 357696 248808 63286916 0 0 0 14 276 293 0 0 100 0 0
Tue Sep 1 14:03:56 2015: 0 0 224412 357688 248808 63286916 0 0 0 13 231 269 0 0 100 0 0
Tue Sep 1 14:04:06 2015: 0 0 224412 357300 248812 63286920 0 0 0 17 266 283 0 0 100 0 0
Tue Sep 1 14:06:56 2015: procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
Tue Sep 1 14:06:56 2015: r b swpd free buff cache si so bi bo in cs us sy id wa st
Tue Sep 1 14:06:56 2015: 1 0 224412 357348 248976 63286928 0 0 0 1 182 231 0 0 100 0 0
Tue Sep 1 14:07:06 2015: 0 0 224412 357348 248980 63286928 0 0 0 9 211 251 0 0 100 0 0
Tue Sep 1 14:07:16 2015: 0 0 224412 357136 248988 63286928 0 0 0 19 287 279 0 0 100 0 0
Tue Sep 1 14:07:26 2015: 0 0 224412 357012 249004 63286928 0 0 0 9 199 244 0 0 100 0 0
Tue Sep 1 14:07:36 2015: 0 0 224412 357080 249012 63286928 0 0 0 7 235 258 0 0 100 0 0
Tue Sep 1 14:10:26 2015: procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
Tue Sep 1 14:10:26 2015: r b swpd free buff cache si so bi bo in cs us sy id wa st
Tue Sep 1 14:10:26 2015: 12 0 224400 351832 265992 62560000 6 0 15 25262 8579 617 96 4 0 0 0
Tue Sep 1 14:10:36 2015: 12 0 224400 379200 266064 62444728 0 0 2 16727 8418 761 97 3 0 0 0
I use this bit of code to get that done.
Get-Content "C:\Projects\Play\Garage\Data_Processing\Sampler.log" | select-string -pattern 'procs|swpd' -notmatch | Out-File "C:\Projects\Play\Garage\Data_Processing\Refined.log"
The resulting file has the desired lines removed, but blank lines have been inserted at the beginning and towards the end. Because of this, I cannot feed the file to the next parsing step. What could I be doing wrong?
Resulting file data:
> [BLANK LINE]
Tue Sep 1 14:03:26 2015: 0 1 224412 358316 248772 63286912 0 0 388 267 1 1 8 0 91 1 0
Tue Sep 1 14:03:36 2015: 0 0 224412 357572 248796 63286916 0 0 0 8 220 261 0 0 100 0 0
Tue Sep 1 14:03:46 2015: 0 0 224412 357696 248808 63286916 0 0 0 14 276 293 0 0 100 0 0
Tue Sep 1 14:03:56 2015: 0 0 224412 357688 248808 63286916 0 0 0 13 231 269 0 0 100 0 0
Tue Sep 1 14:04:06 2015: 0 0 224412 357300 248812 63286920 0 0 0 17 266 283 0 0 100 0 0
Tue Sep 1 14:06:56 2015: 1 0 224412 357348 248976 63286928 0 0 0 1 182 231 0 0 100 0 0
Tue Sep 1 14:07:06 2015: 0 0 224412 357348 248980 63286928 0 0 0 9 211 251 0 0 100 0 0
Tue Sep 1 14:07:16 2015: 0 0 224412 357136 248988 63286928 0 0 0 19 287 279 0 0 100 0 0
Tue Sep 1 14:07:26 2015: 0 0 224412 357012 249004 63286928 0 0 0 9 199 244 0 0 100 0 0
Tue Sep 1 14:07:36 2015: 0 0 224412 357080 249012 63286928 0 0 0 7 235 258 0 0 100 0 0
Tue Sep 1 14:10:26 2015: 12 0 224400 351832 265992 62560000 6 0 15 25262 8579 617 96 4 0 0 0
Tue Sep 1 14:10:36 2015: 12 0 224400 379200 266064 62444728 0 0 2 16727 8418 761 97 3 0 0 0
>[BLANK LINE]
>[BLANK LINE]
>[BLANK LINE]
I'm not sure why Select-String is producing the empty lines, but you could replace it with a simple Where-Object filter, which does not return them.
Here's how I would do it:
Get-Content "C:\Projects\Play\Garage\Data_Processing\Sampler.log" | Where-Object -FilterScript {$_ -notmatch 'procs|swpd'} | Out-File "C:\Projects\Play\Garage\Data_Processing\Refined.log"

CEPH raw space usage

I can't understand where my Ceph raw space has gone.
cluster 90dc9682-8f2c-4c8e-a589-13898965b974
health HEALTH_WARN 72 pgs backfill; 26 pgs backfill_toofull; 51 pgs backfilling; 141 pgs stuck unclean; 5 requests are blocked > 32 sec; recovery 450170/8427917 objects degraded (5.341%); 5 near full osd(s)
monmap e17: 3 mons at {enc18=192.168.100.40:6789/0,enc24=192.168.100.43:6789/0,enc26=192.168.100.44:6789/0}, election epoch 734, quorum 0,1,2 enc18,enc24,enc26
osdmap e3326: 14 osds: 14 up, 14 in
pgmap v5461448: 1152 pgs, 3 pools, 15252 GB data, 3831 kobjects
31109 GB used, 7974 GB / 39084 GB avail
450170/8427917 objects degraded (5.341%)
18 active+remapped+backfill_toofull
1011 active+clean
64 active+remapped+wait_backfill
8 active+remapped+wait_backfill+backfill_toofull
51 active+remapped+backfilling
recovery io 58806 kB/s, 14 objects/s
OSD tree (each host has 2 OSDs):
# id weight type name up/down reweight
-1 36.45 root default
-2 5.44 host enc26
0 2.72 osd.0 up 1
1 2.72 osd.1 up 0.8227
-3 3.71 host enc24
2 0.99 osd.2 up 1
3 2.72 osd.3 up 1
-4 5.46 host enc22
4 2.73 osd.4 up 0.8
5 2.73 osd.5 up 1
-5 5.46 host enc18
6 2.73 osd.6 up 1
7 2.73 osd.7 up 1
-6 5.46 host enc20
9 2.73 osd.9 up 0.8
8 2.73 osd.8 up 1
-7 0 host enc28
-8 5.46 host archives
12 2.73 osd.12 up 1
13 2.73 osd.13 up 1
-9 5.46 host enc27
10 2.73 osd.10 up 1
11 2.73 osd.11 up 1
Real usage:
/dev/rbd0 14T 7.9T 5.5T 59% /mnt/ceph
Pool size:
osd pool default size = 2
Pools:
ceph osd lspools
0 data,1 metadata,2 rbd,
rados df
pool name category KB objects clones degraded unfound rd rd KB wr wr KB
data - 0 0 0 0 0 0 0 0 0
metadata - 0 0 0 0 0 0 0 0 0
rbd - 15993591918 3923880 0 444545 0 82936 1373339 2711424 849398218
total used 32631712348 3923880
total avail 8351008324
total space 40982720672
Raw usage is 4x the real usage. As I understand it, it should be 2x?
Yes, it should be 2x. I'm not really sure that the real raw usage is 7.9T. Why do you check this value on the mapped disk?
These are my pools:
pool name KB objects clones degraded unfound rd rd KB wr wr KB
admin-pack 7689982 1955 0 0 0 693841 3231750 40068930 353462603
public-cloud 105432663 26561 0 0 0 13001298 638035025 222540884 3740413431
rbdkvm_sata 32624026697 7968550 31783 0 0 4950258575 232374308589 12772302818 278106113879
total used 98289353680 7997066
total avail 34474223648
total space 132763577328
You can see that the total amount of used space is roughly 3 times the used space in the pool rbdkvm_sata.
ceph -s shows the same result too:
pgmap v11303091: 5376 pgs, 3 pools, 31220 GB data, 7809 kobjects
93736 GB used, 32876 GB / 123 TB avail
I don't think you have just one RBD image. The result of "ceph osd lspools" indicates that you have 3 pools, and one of them is named "metadata" (maybe you were using CephFS). /dev/rbd0 appeared because you mapped that image, but you could have other images as well. To list the images you can use "rbd list -p <pool>". You can see an image's info with "rbd info -p <pool> <image>".
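For example, against the rbd pool from the lspools output (the image name is a placeholder):

rbd showmapped                  # which images are mapped on this client (explains /dev/rbd0)
rbd list -p rbd                 # all images in the 'rbd' pool
rbd info -p rbd <image-name>    # size and layout details for one image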