mount - can't find /dev/sdc in fstab error - CentOS

I'm trying to mount an XFS disk.
I prepared the directories:
sudo mkdir /grid/;
sudo mkdir /grid/0;
Then I tried
sudo mount -t xfs /dev/sdc /grid/0;
But I always receive:
can't find /dev/sdc /grid/0 in /etc/fstab
My fstab looks like this:
#
# /etc/fstab
# Created by anaconda on Tue Dec 18 11:05:19 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/rhel-root / xfs defaults 0 0
UUID=xxx /boot xfs defaults 0 0
UUID=zzz /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/rhel-home /home xfs defaults 0 0
/dev/mapper/rhel-var_log /var/log xfs defaults 0 0
/dev/mapper/rhel-swap swap swap defaults 0 0
/dev/sdc /grid/0 xfs defaults,noatime 0 0
/dev/sdd /grid/1 xfs defaults,noatime 0 0
/dev/sde /grid/2 xfs defaults,noatime 0 0
/dev/sdf /grid/3 xfs defaults,noatime 0 0
/dev/sdg /grid/4 xfs defaults,noatime 0 0
/dev/sdh /grid/5 xfs defaults,noatime 0 0
/dev/sdi /grid/6 xfs defaults,noatime 0 0
/dev/sdj /grid/7 xfs defaults,noatime 0 0
Is something wrong with my fstab? I can't find what I did wrong.
Edit: lsblk output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 200M 0 part /boot/efi
├─sda2 8:2 0 1G 0 part /boot
└─sda3 8:3 0 445.9G 0 part
  ├─rhel-root 253:0 0 361.9G 0 lvm /
  ├─rhel-swap 253:1 0 4G 0 lvm [SWAP]
  ├─rhel-home 253:2 0 50G 0 lvm /home
  └─rhel-var_log 253:3 0 30G 0 lvm /var/log
sdc 8:32 0 3.7T 0 disk
sdd 8:48 0 3.7T 0 disk
sde 8:64 0 3.7T 0 disk
sdf 8:80 0 3.7T 0 disk
sdg 8:96 0 3.7T 0 disk
sdh 8:112 0 3.7T 0 disk
sdi 8:128 0 3.7T 0 disk
sdj 8:144 0 3.7T 0 disk

The issue was caused by non-breaking spaces that had been mixed into the fstab records and into the mount command.
After converting the text to ASCII and enabling "Show All Characters" in Notepad++, I replaced every non-breaking space with an ordinary space and the command finished successfully.
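A rough shell-side alternative to the Notepad++ cleanup, assuming the file is UTF-8 (where a non-breaking space is the byte pair C2 A0) and the GNU grep/sed that ship with CentOS:
# show fstab lines that still contain a non-breaking space
grep -n $'\xc2\xa0' /etc/fstab
# replace them with ordinary spaces, keeping a backup copy
sudo sed -i.bak 's/\xc2\xa0/ /g' /etc/fstab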

Related

Why does !heap -s heap not work as intended?

My WinDbg version is 10.0.10240.9 AMD64, and while casually debugging a native memory dump I realized that my !heap command behaves differently than described and I am unable to figure out why.
There are plenty of resources mentioning !heap -s:
https://msdn.microsoft.com/en-us/library/windows/hardware/ff563189%28v=vs.85%29.aspx?f=255&MSPPError=-2147217396
http://windbg.info/doc/1-common-cmds.html
When I execute !heap -s
I get this truncated list:
0:000> !heap -s
************************************************************************************************************************
NT HEAP STATS BELOW
************************************************************************************************************************
LFH Key : 0x000000c42ceaf6ca
Termination on corruption : ENABLED
Heap Flags Reserv Commit Virt Free List UCR Virt Lock Fast
(k) (k) (k) (k) length blocks cont. heap
-------------------------------------------------------------------------------------
Virtual block: 0000000003d40000 - 0000000003d40000 (size 0000000000000000)
... many more virtual blocks
0000000000b90000 00000002 3237576 3220948 3237576 20007 1749 204 359 0 LFH
0000000000010000 00008000 64 8 64 5 1 1 0 0
... more heaps
-------------------------------------------------------------------------------------
OK, fine: b90000 looks big, but contrary to those docs above and to !heap -s -?, I cannot get information for this heap. Each of the following commands produces exactly the same output as seen above (as if I had not specified anything after -s):
!heap -s b90000
!heap -s -h b90000
!heap -s 1
I get a load of virtual blocks and a dump of all heaps instead of the single specified one.
Anyone having the same issue?
My "Windows Debugger Version 10.0.10586.567 AMD64" behaved like yours, but
“Microsoft (R) Windows Debugger Version 6.3.9600.16384 AMD64” I have in in:
C:\Program Files\Windows Kits\8.1\Debuggers\x64
0:000> !heap -s -h 0000000000220000
Walking the heap 0000000000220000 ..................Virtual block: 0000000015f20000 - 0000000015f20000 (size 0000000000000000)
Virtual block: 000000001b2e0000 - 000000001b2e0000 (size 0000000000000000)
Virtual block: 000000001f1e0000 - 000000001f1e0000 (size 0000000000000000)
Virtual block: 0000000023c10000 - 0000000023c10000 (size 0000000000000000)
Virtual block: 000000001c060000 - 000000001c060000 (size 0000000000000000)
Virtual block: 000000001ddc0000 - 000000001ddc0000 (size 0000000000000000)
0: Heap 0000000000220000
Flags 00000002 - HEAP_GROWABLE
Reserved memory in segments 226880 (k)
Commited memory in segments 218204 (k)
Virtual bytes (correction for large UCR) 218740 (k)
Free space 12633 (k) (268 blocks)
External fragmentation 5% (268 free blocks)
Virtual address fragmentation 0% (30 uncommited ranges)
Virtual blocks 6 - total 0 KBytes
Lock contention 0
Segments 1
Low fragmentation heap 00000000002291e0
Lock contention 0
Metadata usage 90112 bytes
Statistics:
Segments created 993977
Segments deleted 992639
Segments reused 0
Block cache:
3: 1024 bytes ( 17, 0)
4: 2048 bytes ( 42, 0)
5: 4096 bytes ( 114, 0)
6: 8192 bytes ( 231, 2)
7: 16384 bytes ( 129, 9)
8: 32768 bytes ( 128, 11)
9: 65536 bytes ( 265, 58)
10: 131072 bytes ( 357, 8)
11: 262144 bytes ( 192, 49)
Buckets info:
Size Blocks Seg Empty Aff Distribution
------------------------------------------------
------------------------------------------------
Default heap Front heap Unused bytes
Range (bytes) Busy Free Busy Free Total Average
------------------------------------------------------------------
0 - 1024 577 140 1035286 11608 10563036 10
1024 - 2048 173 3 586 374 27779 36
2048 - 3072 17 19 47 224 1605 25
3072 - 4096 20 12 1 126 348 16
4096 - 5120 35 3 1 30 677 18
5120 - 6144 2 8 0 0 33 16
6144 - 7168 5 9 0 0 56 11
7168 - 8192 0 11 0 0 0 0
8192 - 9216 14 0 0 15 236 16
9216 - 10240 1 0 0 0 8 8
12288 - 13312 1 0 0 0 17 17
14336 - 15360 1 0 0 18 1 1
15360 - 16384 1 0 0 0 32 32
16384 - 17408 10 0 0 0 160 16
22528 - 23552 1 0 0 0 9 9
23552 - 24576 2 0 0 0 32 16
27648 - 28672 1 0 0 0 8 8
30720 - 31744 0 1 0 0 0 0
32768 - 33792 18 0 0 0 250 13
33792 - 34816 0 1 0 0 0 0
39936 - 40960 0 2 0 0 0 0
40960 - 41984 0 1 0 0 0 0
43008 - 44032 0 2 0 0 0 0
44032 - 45056 0 5 0 0 0 0
45056 - 46080 0 1 0 0 0 0
46080 - 47104 0 2 0 0 0 0
47104 - 48128 0 1 0 0 0 0
49152 - 50176 0 3 0 0 0 0
50176 - 51200 1 0 0 0 16 16
51200 - 52224 0 4 0 0 0 0
57344 - 58368 0 1 0 0 0 0
58368 - 59392 0 1 0 0 0 0
62464 - 63488 0 1 0 0 0 0
63488 - 64512 200 1 0 0 3200 16
64512 - 65536 0 1 0 0 0 0
65536 - 66560 1029 2 0 0 10624 10
79872 - 80896 100 0 0 0 900 9
131072 - 132096 9 0 0 0 144 16
193536 - 194560 1 0 0 0 9 9
224256 - 225280 1 0 0 0 16 16
262144 - 263168 49 27 0 0 784 16
327680 - 328704 1 0 0 0 17 17
384000 - 385024 0 1 0 0 0 0
523264 - 524288 1 5 0 0 23 23
------------------------------------------------------------------
Total 2271 268 1035921 12395 10610020 10
This might be a workaround;
I can't answer why the Win 10 version doesn't work :-(
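If you go that route, a minimal sketch of opening the same dump with the 8.1 debugger from that path (the dump file name here is just a placeholder):
"C:\Program Files\Windows Kits\8.1\Debuggers\x64\windbg.exe" -z C:\dumps\mydump.dmp
Then run !heap -s -h <heap-address> as shown above.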

CEPH raw space usage

I can't understand where my Ceph raw space has gone.
cluster 90dc9682-8f2c-4c8e-a589-13898965b974
health HEALTH_WARN 72 pgs backfill; 26 pgs backfill_toofull; 51 pgs backfilling; 141 pgs stuck unclean; 5 requests are blocked > 32 sec; recovery 450170/8427917 objects degraded (5.341%); 5 near full osd(s)
monmap e17: 3 mons at {enc18=192.168.100.40:6789/0,enc24=192.168.100.43:6789/0,enc26=192.168.100.44:6789/0}, election epoch 734, quorum 0,1,2 enc18,enc24,enc26
osdmap e3326: 14 osds: 14 up, 14 in
pgmap v5461448: 1152 pgs, 3 pools, 15252 GB data, 3831 kobjects
31109 GB used, 7974 GB / 39084 GB avail
450170/8427917 objects degraded (5.341%)
18 active+remapped+backfill_toofull
1011 active+clean
64 active+remapped+wait_backfill
8 active+remapped+wait_backfill+backfill_toofull
51 active+remapped+backfilling
recovery io 58806 kB/s, 14 objects/s
OSD tree (each host has 2 OSDs):
# id weight type name up/down reweight
-1 36.45 root default
-2 5.44 host enc26
0 2.72 osd.0 up 1
1 2.72 osd.1 up 0.8227
-3 3.71 host enc24
2 0.99 osd.2 up 1
3 2.72 osd.3 up 1
-4 5.46 host enc22
4 2.73 osd.4 up 0.8
5 2.73 osd.5 up 1
-5 5.46 host enc18
6 2.73 osd.6 up 1
7 2.73 osd.7 up 1
-6 5.46 host enc20
9 2.73 osd.9 up 0.8
8 2.73 osd.8 up 1
-7 0 host enc28
-8 5.46 host archives
12 2.73 osd.12 up 1
13 2.73 osd.13 up 1
-9 5.46 host enc27
10 2.73 osd.10 up 1
11 2.73 osd.11 up 1
Real usage:
/dev/rbd0 14T 7.9T 5.5T 59% /mnt/ceph
Pool size:
osd pool default size = 2
Pools:
ceph osd lspools
0 data,1 metadata,2 rbd,
rados df
pool name category KB objects clones degraded unfound rd rd KB wr wr KB
data - 0 0 0 0 0 0 0 0 0
metadata - 0 0 0 0 0 0 0 0 0
rbd - 15993591918 3923880 0 444545 0 82936 1373339 2711424 849398218
total used 32631712348 3923880
total avail 8351008324
total space 40982720672
Raw usage is 4x the real usage. As I understand it, it should be 2x?
Yes, it should be 2x. I'm not really sure that the real raw usage is 7.9T. Why do you check this value on the mapped disk?
These are my pools:
pool name KB objects clones degraded unfound rd rd KB wr wr KB
admin-pack 7689982 1955 0 0 0 693841 3231750 40068930 353462603
public-cloud 105432663 26561 0 0 0 13001298 638035025 222540884 3740413431
rbdkvm_sata 32624026697 7968550 31783 0 0 4950258575 232374308589 12772302818 278106113879
total used 98289353680 7997066
total avail 34474223648
total space 132763577328
You can see that the total amount of used space is roughly 3 times the used space in the pool rbdkvm_sata.
ceph -s shows the same result too:
pgmap v11303091: 5376 pgs, 3 pools, 31220 GB data, 7809 kobjects
93736 GB used, 32876 GB / 123 TB avail
I don't think you have just one rbd image. The result of "ceph osd lspools" indicates that you have 3 pools, and one of the pools is named "metadata" (maybe you were using CephFS). /dev/rbd0 appeared because you mapped that image, but you could have other images as well. To list the images you can use "rbd list -p <pool>"; you can see an image's info with "rbd info -p <pool> <image>".
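As a sketch of that check (the pool name "rbd" is taken from the lspools output above; the image name is a placeholder):
# list all images in the pool
rbd list -p rbd
# show size and format of a single image
rbd info -p rbd <image-name>
# on newer Ceph releases this also reports actual vs. provisioned usage per image
rbd du -p rbd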

MongoDB: Increasing read speed on a single machine

I use MongoDB to store price events for stocks. Depending on what you want to screen, the number of events can rapidly grow to 1-2 GB.
I run MongoDB on a single machine and it is taking longer and longer to load the data. I cannot find a clear answer on the web as to whether "sharding on a single server" benefits read speed.
Is it the right path to increase read speed?
insert query update delete getmore command flushes mapped vsize res faults locked db
0 1 395 0 1 395 0 63.9g 128g 2.53g 7484 prices:70.5%
0 0 14726 0 5 14728 0 63.9g 128g 2.48g 31555 prices:7.4%
0 0 0 0 0 1 0 63.9g 128g 2.48g 436 prices:0.0%
0 0 0 0 1 1 0 63.9g 128g 2.48g 0 prices:0.0%
0 0 0 0 0 1 1 63.9g 128g 2.49g 3877 .:83.9%
0 0 0 0 0 1 0 63.9g 128g 2.49g 0 prices:0.0%

"netIn" in mongostat output

I have written a test script which does millions of updates (using an update query) in a collection. Following is the mongostat output:
insert query update delete getmore command flushes mapped vsize res faults locked % idx miss % qr|qw ar|aw netIn netOut conn time
0 0 21156 0 0 1 0 208m 2.45g 119m 0 81.7 0 0|8 0|9 2m 1k 10 12:52:11
0 0 20620 0 0 1 0 208m 2.45g 119m 0 82.5 0 0|8 0|9 1m 1k 10 12:52:12
0 0 21915 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|9 2m 1k 10 12:52:13
0 0 21634 0 0 1 0 208m 2.45g 119m 0 82.1 0 0|8 0|9 2m 1k 10 12:52:15
0 0 19793 0 0 1 0 208m 2.45g 119m 0 81.8 0 0|8 0|9 1m 1k 10 12:52:16
0 0 22062 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|8 2m 1k 10 12:52:17
0 0 23395 0 0 1 0 208m 2.45g 119m 0 81.9 0 0|8 0|8 2m 1k 10 12:52:19
netIn shows the total incoming network traffic in bytes per second, I hope. Is there any way to increase netIn to some MB, so that I can increase the number of update statements per second?
I don't think you understand the netIn statistic. It isn't some limit but the actual amount of data received by MongoDB per interval sample. In other words, the netIn value will increase if you (can) do more updates.
Increasing update throughput itself may be possible but is very application specific.
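As an illustration of one common application-level approach, batching the updates so that many of them share a round trip, here is a minimal sketch using the mongo shell's unordered bulk API (MongoDB 2.6 or newer); the database, collection and field names are made up:
# pipe a small script into the mongo shell; "test", "mycol" and "value" are placeholders
mongo test <<'EOF'
var bulk = db.mycol.initializeUnorderedBulkOp();
for (var i = 0; i < 10000; i++) {
    bulk.find({ _id: i }).update({ $set: { value: i * 2 } });
}
bulk.execute();   // the shell sends the updates in large batches instead of one by one
EOF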

Memcached using more than max memory

I have an installation of memcached which I want to use in my production environment, but when I ran a couple of tests it seemed that memcached doesn't free up memory even after it has used up all of its allocated memory. Also, I logged in and ran a flush_all command, but the objects are still in the cache.
Here are the outputs from some tests.
memcache-top
memcache-top v0.6 (default port: 11211, color: on, refresh: 3 seconds)
INSTANCE USAGE HIT % CONN TIME EVICT/s READ/s WRITE/s
127.0.0.1:11211 427.1% 0.0% 18 1.4ms 0.0 244 261.0K
AVERAGE: 427.1% 0.0% 18 1.4ms 0.0 244 261.0K
TOTAL: 4.3MB/ 1.0MB 18 1.4ms 0.0 244 261.0K
memcached-tool 127.0.0.1:11211 display
No Item_Size Max_age Pages Count Full? Evicted Evict_Time OOM
1 560B 4s 1 1872 yes 0 0 15488
2 704B 32s 1 559 no 0 0 0
3 880B 4s 1 1191 yes 0 0 1335
4 1.1K 9s 1 116 no 0 0 0
5 1.4K 21s 1 14 no 0 0 0
6 1.7K 4s 1 17 no 0 0 0
7 2.1K 84s 1 24 no 0 0 0
8 2.7K 130s 1 60 no 0 0 0
9 3.3K 25s 1 290 no 0 0 0
10 4.2K 9s 1 194 no 0 0 0
11 5.2K 9s 1 116 no 0 0 0
15 12.7K 816s 1 1 no 0 0 0
16 15.9K 769s 1 5 no 0 0 0
18 24.8K 786s 1 1 no 0 0 0
21 48.5K 816s 1 1 no 0 0 0
memcached-tool 127.0.0.1:11211 stats
127.0.0.1:11211 Field Value
accepting_conns 1
auth_cmds 0
auth_errors 0
bytes 4478060
bytes_read 23964596
bytes_written 546642860
cas_badval 0
cas_hits 0
cas_misses 0
cmd_flush 0
cmd_get 240894
cmd_set 4504
conn_yields 0
connection_structures 21
curr_connections 18
curr_items 4461
decr_hits 0
decr_misses 0
delete_hits 0
delete_misses 0
evictions 0
get_hits 43756
get_misses 197138
incr_hits 0
incr_misses 0
limit_maxbytes 1048576
listen_disabled_num 0
pid 8731
pointer_size 64
reclaimed 0
rusage_system 5.047232
rusage_user 4.311344
threads 4
time 1306247929
total_connections 3092
total_items 4504
uptime 1240
version 1.4.5
-m tells memcached how much RAM to use for item storage (in megabytes). Note carefully that this isn't a global memory limit, so memcached will use a few % more memory than you tell it to. Set this to safe values. Setting it to less than 48 megabytes does not work properly in 1.4.x and earlier. It will still use the memory.
Source: https://github.com/memcached/memcached/wiki/ConfiguringServer#commandline-arguments
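The limit_maxbytes of 1048576 in the stats above corresponds to -m 1, far below that 48 MB floor. A minimal sketch of restarting with a saner value, assuming you launch memcached by hand (otherwise adjust the -m value in your init script or config file; the user name below is just the typical default):
# -d daemonize, -u run as this user, -p port, -m item-storage limit in megabytes
memcached -d -u memcache -p 11211 -m 64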