Recover from a failed Ceph cluster - inactive PGs (down)

Ceph Cluster PGs inactive/down.
I had a healthy cluster and tried adding a new node using the ceph-deploy tool. I didn't set the noout flag before adding the node to the cluster.
While using ceph-deploy I ended up deleting the new OSDs a couple of times, and it looks like Ceph tried to rebalance PGs; those PGs are now in an inactive/down state.
I tried recovering a single PG just to see whether it would recover, but it did not. I use Ceph to store OpenStack Glance images and VM disks, so all new and existing VMs are now slow or unresponsive.
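For reference, the flag mentioned above is normally set before maintenance and cleared once the work is done; these are the standard commands:
ceph osd set noout      # stop Ceph from marking stopped OSDs out and rebalancing
ceph osd unset noout    # clear the flag after maintenance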
Current output of ceph osd tree (note: fre201 is the new node; I have recently disabled the OSD services on it):
[root@fre201 ceph]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 70.92137 root default
-2 5.45549 host fre101
0 hdd 1.81850 osd.0 up 1.00000 1.00000
1 hdd 1.81850 osd.1 up 1.00000 1.00000
2 hdd 1.81850 osd.2 up 1.00000 1.00000
-9 5.45549 host fre103
3 hdd 1.81850 osd.3 up 1.00000 1.00000
4 hdd 1.81850 osd.4 up 1.00000 1.00000
5 hdd 1.81850 osd.5 up 1.00000 1.00000
-3 5.45549 host fre105
6 hdd 1.81850 osd.6 up 1.00000 1.00000
7 hdd 1.81850 osd.7 up 1.00000 1.00000
8 hdd 1.81850 osd.8 up 1.00000 1.00000
-4 5.45549 host fre107
9 hdd 1.81850 osd.9 up 1.00000 1.00000
10 hdd 1.81850 osd.10 up 1.00000 1.00000
11 hdd 1.81850 osd.11 up 1.00000 1.00000
-5 5.45549 host fre109
12 hdd 1.81850 osd.12 up 1.00000 1.00000
13 hdd 1.81850 osd.13 up 1.00000 1.00000
14 hdd 1.81850 osd.14 up 1.00000 1.00000
-6 5.45549 host fre111
15 hdd 1.81850 osd.15 up 1.00000 1.00000
16 hdd 1.81850 osd.16 up 1.00000 1.00000
17 hdd 1.81850 osd.17 up 0.79999 1.00000
-7 5.45549 host fre113
18 hdd 1.81850 osd.18 up 1.00000 1.00000
19 hdd 1.81850 osd.19 up 1.00000 1.00000
20 hdd 1.81850 osd.20 up 1.00000 1.00000
-8 5.45549 host fre115
21 hdd 1.81850 osd.21 up 1.00000 1.00000
22 hdd 1.81850 osd.22 up 1.00000 1.00000
23 hdd 1.81850 osd.23 up 1.00000 1.00000
-10 5.45549 host fre117
24 hdd 1.81850 osd.24 up 1.00000 1.00000
25 hdd 1.81850 osd.25 up 1.00000 1.00000
26 hdd 1.81850 osd.26 up 1.00000 1.00000
-11 5.45549 host fre119
27 hdd 1.81850 osd.27 up 1.00000 1.00000
28 hdd 1.81850 osd.28 up 1.00000 1.00000
29 hdd 1.81850 osd.29 up 1.00000 1.00000
-12 5.45549 host fre121
30 hdd 1.81850 osd.30 up 1.00000 1.00000
31 hdd 1.81850 osd.31 up 1.00000 1.00000
32 hdd 1.81850 osd.32 up 1.00000 1.00000
-13 5.45549 host fre123
33 hdd 1.81850 osd.33 up 1.00000 1.00000
34 hdd 1.81850 osd.34 up 1.00000 1.00000
35 hdd 1.81850 osd.35 up 1.00000 1.00000
-27 5.45549 host fre201
36 hdd 1.81850 osd.36 down 0 1.00000
37 hdd 1.81850 osd.37 down 0 1.00000
38 hdd 1.81850 osd.38 down 0 1.00000
Current health of the Ceph cluster:
# ceph -s
cluster:
id: XXXXXXXXXXXXXXXX
health: HEALTH_ERR
3 pools have many more objects per pg than average
358887/12390692 objects misplaced (2.896%)
2 scrub errors
9677 PGs pending on creation
Reduced data availability: 7125 pgs inactive, 6185 pgs down, 2 pgs peering, 2709 pgs stale
Possible data damage: 2 pgs inconsistent
Degraded data redundancy: 193505/12390692 objects degraded (1.562%), 351 pgs degraded, 1303 pgs undersized
53882 slow requests are blocked > 32 sec
4082 stuck requests are blocked > 4096 sec
too many PGs per OSD (2969 > max 200)
services:
mon: 3 daemons, quorum ceph-mon01,ceph-mon02,ceph-mon03
mgr: ceph-mon03(active), standbys: ceph-mon01, ceph-mon02
osd: 39 osds: 36 up, 36 in; 51 remapped pgs
rgw: 1 daemon active
data:
pools: 18 pools, 54656 pgs
objects: 6050k objects, 10940 GB
usage: 21721 GB used, 45314 GB / 67035 GB avail
pgs: 13.036% pgs not active
193505/12390692 objects degraded (1.562%)
358887/12390692 objects misplaced (2.896%)
46177 active+clean
5070 down
1114 stale+down
1088 stale+active+undersized
547 activating
201 stale+active+undersized+degraded
173 stale+activating
96 activating+degraded
61 stale+active+clean
43 activating+remapped
39 stale+activating+degraded
24 stale+activating+remapped
9 activating+undersized+degraded+remapped
4 stale+activating+undersized+degraded+remapped
2 active+clean+inconsistent
1 stale+activating+degraded+remapped
1 stale+remapped+peering
1 active+undersized
1 stale+peering
1 stale+active+clean+remapped
1 down+remapped
1 stale+remapped
1 activating+degraded+remapped
io:
client: 967 kB/s rd, 1225 kB/s wr, 29 op/s rd, 30 op/s wr
I am not sure how to recover the 7125 inactive PGs, which are on OSDs that are still up. Any help would be appreciated.

The Luminous release of Ceph enforces a maximum number of PGs per OSD of 200 by default. In my case there were more than 3000 per OSD, so I had to raise that limit to 5000 in the /etc/ceph/ceph.conf file on the monitors and OSDs, which allowed Ceph to recover.
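For reference, a minimal sketch of what that change typically looks like, assuming the option meant here is mon_max_pg_per_osd (the Luminous name for this per-OSD PG limit; the text above calls it "max_number_of pgs"):
# /etc/ceph/ceph.conf on the monitor (and OSD) hosts
[global]
mon_max_pg_per_osd = 5000
Restart the monitors (and OSDs, if the option was added there too) so the new limit takes effect.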

Related

how to fix ceph warning "storage filling up"

I have a Ceph cluster, and the monitoring tab of the dashboard shows the warning "storage filling up":
alertname
storage filling up
description
Mountpoint /rootfs/run on ceph2-node-03.fns will be full in less than 5 days assuming the average fill-up rate of the past 48 hours.
but all devices are free:
[root@ceph2-node-01 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
0 hdd 0.01900 1.00000 20 GiB 61 MiB 15 MiB 0 B 44 MiB 20 GiB 0.30 0.92 0 up
3 ssd 0.01900 1.00000 20 GiB 69 MiB 15 MiB 5 KiB 53 MiB 20 GiB 0.33 1.04 1 up
1 hdd 0.01900 1.00000 20 GiB 76 MiB 16 MiB 6 KiB 60 MiB 20 GiB 0.37 1.15 0 up
4 ssd 0.01900 1.00000 20 GiB 68 MiB 15 MiB 3 KiB 52 MiB 20 GiB 0.33 1.03 1 up
2 hdd 0.01900 1.00000 20 GiB 66 MiB 16 MiB 6 KiB 50 MiB 20 GiB 0.32 1.00 0 up
5 ssd 0.01900 1.00000 20 GiB 57 MiB 15 MiB 5 KiB 41 MiB 20 GiB 0.28 0.86 1 up
TOTAL 120 GiB 396 MiB 92 MiB 28 KiB 300 MiB 120 GiB 0.32
MIN/MAX VAR: 0.86/1.15 STDDEV: 0.03
What should I do to fix this warning?
Is this a bug, or something else?

Ceph displayed size calculation

I'm currently running a Ceph cluster (Nautilus 14.2.8-59.el8cp) and I have some questions about the sizes it reports:
What exactly are the "USED" and "MAX AVAIL" columns in the 'ceph df' output, and how are they calculated?
If I mount a CephFS space on a Linux machine, why does the "size" column of the "df -h" output change? I had "48T" a few days ago and now I have "46T"; is my CephFS pool shrinking?
No OSDs are down at all.
Users cleaned up their CephFS space, freeing 11T, and the total went back up to 48T instead of 46T, which is weird.
I have 1024 PGs for the CephFS; I don't know if that's enough.
I can't find documentation on how Ceph calculates the sizes it displays; it's kind of a black box.
Thank you
Ceph df
RAW STORAGE:
CLASS SIZE AVAIL USED RAW USED %RAW USED
MixedUse 680 TiB 465 TiB 214 TiB 214 TiB 31.54
ReadIntensive 204 TiB 85 TiB 119 TiB 120 TiB 58.59
TOTAL 884 TiB 550 TiB 333 TiB 334 TiB 37.79
POOLS:
POOL ID STORED OBJECTS USED %USED MAX AVAIL
glance 1 31 GiB 4.66k 93 GiB 0.03 92 TiB
cinder 2 9.4 TiB 2.56M 28 TiB 9.23 92 TiB
nova 3 62 TiB 16.25M 185 TiB 40.08 92 TiB
cinder-backup 4 0 B 0 0 B 0 92 TiB
gnocchi 5 0 B 0 0 B 0 92 TiB
cephfs_data 6 40 TiB 11.46M 119 TiB 88.02 5.4 TiB
cephfs_metadata 7 749 MiB 88.95k 1.2 GiB 0 5.4 TiB
scbench 8 28 GiB 6.76k 84 GiB 0.50 5.4 TiB
Ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 884.05933 root default
-19 817.39832 datacenter DC3
-15 204.34959 rack J07
-3 68.11653 host server-01797
0 MixedUse 5.82190 osd.0 up 1.00000 1.00000
24 MixedUse 1.45549 osd.24 up 1.00000 1.00000
30 MixedUse 1.45549 osd.30 up 1.00000 1.00000
31 MixedUse 5.82190 osd.31 up 1.00000 1.00000
37 MixedUse 5.82190 osd.37 up 1.00000 1.00000
42 MixedUse 5.82190 osd.42 up 1.00000 1.00000
47 MixedUse 5.82190 osd.47 up 1.00000 1.00000
52 MixedUse 5.82190 osd.52 up 1.00000 1.00000
58 MixedUse 1.45549 osd.58 up 1.00000 1.00000
87 MixedUse 1.45549 osd.87 up 1.00000 1.00000
102 MixedUse 5.82190 osd.102 up 1.00000 1.00000
188 MixedUse 5.82190 osd.188 up 1.00000 1.00000
7 ReadIntensive 1.74660 osd.7 up 1.00000 1.00000
13 ReadIntensive 1.74660 osd.13 up 1.00000 1.00000
20 ReadIntensive 1.74660 osd.20 up 1.00000 1.00000
85 ReadIntensive 1.74660 osd.85 up 1.00000 1.00000
124 ReadIntensive 1.74660 osd.124 up 1.00000 1.00000
125 ReadIntensive 1.74660 osd.125 up 1.00000 1.00000
126 ReadIntensive 1.74660 osd.126 up 1.00000 1.00000
127 ReadIntensive 1.74660 osd.127 up 1.00000 1.00000
130 ReadIntensive 1.74660 osd.130 up 1.00000 1.00000
-5 68.11653 host server-01798
1 MixedUse 5.82190 osd.1 up 1.00000 1.00000
28 MixedUse 5.82190 osd.28 up 1.00000 1.00000
35 MixedUse 5.82190 osd.35 up 1.00000 1.00000
40 MixedUse 5.82190 osd.40 up 1.00000 1.00000
45 MixedUse 5.82190 osd.45 up 1.00000 1.00000
50 MixedUse 5.82190 osd.50 up 1.00000 1.00000
55 MixedUse 1.45549 osd.55 up 1.00000 1.00000
59 MixedUse 1.45549 osd.59 up 1.00000 1.00000
64 MixedUse 5.82190 osd.64 up 1.00000 1.00000
119 MixedUse 1.45549 osd.119 up 1.00000 1.00000
122 MixedUse 1.45549 osd.122 up 1.00000 1.00000
236 MixedUse 5.82190 osd.236 up 1.00000 1.00000
6 ReadIntensive 1.74660 osd.6 up 1.00000 1.00000
12 ReadIntensive 1.74660 osd.12 up 1.00000 1.00000
18 ReadIntensive 1.74660 osd.18 up 1.00000 1.00000
131 ReadIntensive 1.74660 osd.131 up 1.00000 1.00000
132 ReadIntensive 1.74660 osd.132 up 1.00000 1.00000
133 ReadIntensive 1.74660 osd.133 up 1.00000 1.00000
136 ReadIntensive 1.74660 osd.136 up 1.00000 1.00000
137 ReadIntensive 1.74660 osd.137 up 1.00000 1.00000
138 ReadIntensive 1.74660 osd.138 up 1.00000 1.00000
-7 68.11653 host server-01799
2 MixedUse 5.82190 osd.2 up 1.00000 1.00000
25 MixedUse 5.82190 osd.25 up 1.00000 1.00000
32 MixedUse 5.82190 osd.32 up 1.00000 1.00000
38 MixedUse 5.82190 osd.38 up 1.00000 1.00000
43 MixedUse 5.82190 osd.43 up 1.00000 1.00000
48 MixedUse 5.82190 osd.48 up 1.00000 1.00000
57 MixedUse 1.45549 osd.57 up 1.00000 1.00000
61 MixedUse 1.45549 osd.61 up 1.00000 1.00000
101 MixedUse 5.82190 osd.101 up 1.00000 1.00000
107 MixedUse 1.45549 osd.107 up 1.00000 1.00000
110 MixedUse 1.45549 osd.110 up 1.00000 1.00000
212 MixedUse 5.82190 osd.212 up 1.00000 1.00000
9 ReadIntensive 1.74660 osd.9 up 1.00000 1.00000
15 ReadIntensive 1.74660 osd.15 up 1.00000 1.00000
21 ReadIntensive 1.74660 osd.21 up 1.00000 1.00000
153 ReadIntensive 1.74660 osd.153 up 1.00000 1.00000
154 ReadIntensive 1.74660 osd.154 up 1.00000 1.00000
155 ReadIntensive 1.74660 osd.155 up 1.00000 1.00000
156 ReadIntensive 1.74660 osd.156 up 1.00000 1.00000
157 ReadIntensive 1.74660 osd.157 up 1.00000 1.00000
158 ReadIntensive 1.74660 osd.158 up 1.00000 1.00000
-37 204.34953 rack J08
-31 68.11647 host server-06076
93 MixedUse 5.82190 osd.93 up 1.00000 1.00000
95 MixedUse 5.82190 osd.95 up 1.00000 1.00000
97 MixedUse 5.82190 osd.97 up 1.00000 1.00000
99 MixedUse 5.82190 osd.99 up 1.00000 1.00000
103 MixedUse 5.82190 osd.103 up 1.00000 1.00000
144 MixedUse 5.82190 osd.144 up 1.00000 1.00000
162 MixedUse 1.45547 osd.162 up 1.00000 1.00000
163 MixedUse 1.45547 osd.163 up 1.00000 1.00000
165 MixedUse 1.45547 osd.165 up 1.00000 1.00000
166 MixedUse 1.45547 osd.166 up 1.00000 1.00000
172 MixedUse 5.82190 osd.172 up 1.00000 1.00000
284 MixedUse 5.82190 osd.284 up 1.00000 1.00000
139 ReadIntensive 1.74660 osd.139 up 1.00000 1.00000
143 ReadIntensive 1.74660 osd.143 up 1.00000 1.00000
145 ReadIntensive 1.74660 osd.145 up 1.00000 1.00000
146 ReadIntensive 1.74660 osd.146 up 1.00000 1.00000
147 ReadIntensive 1.74660 osd.147 up 1.00000 1.00000
149 ReadIntensive 1.74660 osd.149 up 1.00000 1.00000
150 ReadIntensive 1.74660 osd.150 up 1.00000 1.00000
151 ReadIntensive 1.74660 osd.151 up 1.00000 1.00000
152 ReadIntensive 1.74660 osd.152 up 1.00000 1.00000
-34 68.11653 host server-06077
53 MixedUse 5.82190 osd.53 up 1.00000 1.00000
71 MixedUse 5.82190 osd.71 up 1.00000 1.00000
76 MixedUse 5.82190 osd.76 up 1.00000 1.00000
84 MixedUse 5.82190 osd.84 up 1.00000 1.00000
89 MixedUse 5.82190 osd.89 up 1.00000 1.00000
121 MixedUse 5.82190 osd.121 up 1.00000 1.00000
148 MixedUse 5.82190 osd.148 up 1.00000 1.00000
168 MixedUse 5.82190 osd.168 up 1.00000 1.00000
186 MixedUse 1.45549 osd.186 up 1.00000 1.00000
187 MixedUse 1.45549 osd.187 up 1.00000 1.00000
189 MixedUse 1.45549 osd.189 up 1.00000 1.00000
190 MixedUse 1.45549 osd.190 up 1.00000 1.00000
169 ReadIntensive 1.74660 osd.169 up 1.00000 1.00000
170 ReadIntensive 1.74660 osd.170 up 1.00000 1.00000
171 ReadIntensive 1.74660 osd.171 up 1.00000 1.00000
232 ReadIntensive 1.74660 osd.232 up 1.00000 1.00000
233 ReadIntensive 1.74660 osd.233 up 1.00000 1.00000
239 ReadIntensive 1.74660 osd.239 up 1.00000 1.00000
245 ReadIntensive 1.74660 osd.245 up 1.00000 1.00000
246 ReadIntensive 1.74660 osd.246 up 1.00000 1.00000
247 ReadIntensive 1.74660 osd.247 up 1.00000 1.00000
-40 68.11653 host server-06078
63 MixedUse 5.82190 osd.63 up 1.00000 1.00000
72 MixedUse 5.82190 osd.72 up 1.00000 1.00000
78 MixedUse 5.82190 osd.78 up 1.00000 1.00000
88 MixedUse 5.82190 osd.88 up 1.00000 1.00000
91 MixedUse 5.82190 osd.91 up 1.00000 1.00000
140 MixedUse 5.82190 osd.140 up 1.00000 1.00000
192 MixedUse 5.82190 osd.192 up 1.00000 1.00000
196 MixedUse 5.82190 osd.196 up 1.00000 1.00000
210 MixedUse 1.45549 osd.210 up 1.00000 1.00000
211 MixedUse 1.45549 osd.211 up 1.00000 1.00000
213 MixedUse 1.45549 osd.213 up 1.00000 1.00000
214 MixedUse 1.45549 osd.214 up 1.00000 1.00000
193 ReadIntensive 1.74660 osd.193 up 1.00000 1.00000
194 ReadIntensive 1.74660 osd.194 up 1.00000 1.00000
195 ReadIntensive 1.74660 osd.195 up 1.00000 1.00000
248 ReadIntensive 1.74660 osd.248 up 1.00000 1.00000
249 ReadIntensive 1.74660 osd.249 up 1.00000 1.00000
250 ReadIntensive 1.74660 osd.250 up 1.00000 1.00000
251 ReadIntensive 1.74660 osd.251 up 1.00000 1.00000
252 ReadIntensive 1.74660 osd.252 up 1.00000 1.00000
253 ReadIntensive 1.74660 osd.253 up 1.00000 1.00000
-16 204.34959 rack K07
-9 68.11653 host server-01800
3 MixedUse 5.82190 osd.3 up 1.00000 1.00000
34 MixedUse 5.82190 osd.34 up 1.00000 1.00000
41 MixedUse 5.82190 osd.41 up 1.00000 1.00000
49 MixedUse 5.82190 osd.49 up 1.00000 1.00000
62 MixedUse 5.82190 osd.62 up 1.00000 1.00000
65 MixedUse 5.82190 osd.65 up 1.00000 1.00000
66 MixedUse 5.82190 osd.66 up 1.00000 1.00000
67 MixedUse 1.45549 osd.67 up 1.00000 1.00000
68 MixedUse 1.45549 osd.68 up 1.00000 1.00000
69 MixedUse 5.82190 osd.69 up 1.00000 1.00000
70 MixedUse 1.45549 osd.70 up 1.00000 1.00000
73 MixedUse 1.45549 osd.73 up 1.00000 1.00000
22 ReadIntensive 1.74660 osd.22 up 1.00000 1.00000
29 ReadIntensive 1.74660 osd.29 up 1.00000 1.00000
33 ReadIntensive 1.74660 osd.33 up 1.00000 1.00000
159 ReadIntensive 1.74660 osd.159 up 1.00000 1.00000
160 ReadIntensive 1.74660 osd.160 up 1.00000 1.00000
161 ReadIntensive 1.74660 osd.161 up 1.00000 1.00000
167 ReadIntensive 1.74660 osd.167 up 1.00000 1.00000
173 ReadIntensive 1.74660 osd.173 up 1.00000 1.00000
174 ReadIntensive 1.74660 osd.174 up 1.00000 1.00000
-11 68.11653 host server-01801
4 MixedUse 5.82190 osd.4 up 1.00000 1.00000
10 MixedUse 5.82190 osd.10 up 1.00000 1.00000
26 MixedUse 5.82190 osd.26 up 1.00000 1.00000
36 MixedUse 5.82190 osd.36 up 1.00000 1.00000
46 MixedUse 5.82190 osd.46 up 1.00000 1.00000
56 MixedUse 5.82190 osd.56 up 1.00000 1.00000
116 MixedUse 5.82190 osd.116 up 1.00000 1.00000
128 MixedUse 5.82190 osd.128 up 1.00000 1.00000
134 MixedUse 1.45549 osd.134 up 1.00000 1.00000
135 MixedUse 1.45549 osd.135 up 1.00000 1.00000
141 MixedUse 1.45549 osd.141 up 1.00000 1.00000
142 MixedUse 1.45549 osd.142 up 1.00000 1.00000
8 ReadIntensive 1.74660 osd.8 up 1.00000 1.00000
14 ReadIntensive 1.74660 osd.14 up 1.00000 1.00000
19 ReadIntensive 1.74660 osd.19 up 1.00000 1.00000
175 ReadIntensive 1.74660 osd.175 up 1.00000 1.00000
176 ReadIntensive 1.74660 osd.176 up 1.00000 1.00000
177 ReadIntensive 1.74660 osd.177 up 1.00000 1.00000
178 ReadIntensive 1.74660 osd.178 up 1.00000 1.00000
179 ReadIntensive 1.74660 osd.179 up 1.00000 1.00000
180 ReadIntensive 1.74660 osd.180 up 1.00000 1.00000
-13 68.11653 host server-01802
5 MixedUse 5.82190 osd.5 up 1.00000 1.00000
16 MixedUse 5.82190 osd.16 up 1.00000 1.00000
27 MixedUse 5.82190 osd.27 up 1.00000 1.00000
39 MixedUse 5.82190 osd.39 up 1.00000 1.00000
44 MixedUse 5.82190 osd.44 up 1.00000 1.00000
51 MixedUse 5.82190 osd.51 up 1.00000 1.00000
54 MixedUse 1.45549 osd.54 up 1.00000 1.00000
60 MixedUse 1.45549 osd.60 up 1.00000 1.00000
94 MixedUse 1.45549 osd.94 up 1.00000 1.00000
96 MixedUse 1.45549 osd.96 up 1.00000 1.00000
129 MixedUse 5.82190 osd.129 up 1.00000 1.00000
260 MixedUse 5.82190 osd.260 up 1.00000 1.00000
11 ReadIntensive 1.74660 osd.11 up 1.00000 1.00000
17 ReadIntensive 1.74660 osd.17 up 1.00000 1.00000
23 ReadIntensive 1.74660 osd.23 up 1.00000 1.00000
181 ReadIntensive 1.74660 osd.181 up 1.00000 1.00000
182 ReadIntensive 1.74660 osd.182 up 1.00000 1.00000
183 ReadIntensive 1.74660 osd.183 up 1.00000 1.00000
184 ReadIntensive 1.74660 osd.184 up 1.00000 1.00000
185 ReadIntensive 1.74660 osd.185 up 1.00000 1.00000
191 ReadIntensive 1.74660 osd.191 up 1.00000 1.00000
-43 204.34959 rack K08
-46 68.11653 host server-06079
75 MixedUse 5.82190 osd.75 up 1.00000 1.00000
82 MixedUse 5.82190 osd.82 up 1.00000 1.00000
86 MixedUse 5.82190 osd.86 up 1.00000 1.00000
92 MixedUse 5.82190 osd.92 up 1.00000 1.00000
106 MixedUse 5.82190 osd.106 up 1.00000 1.00000
111 MixedUse 5.82190 osd.111 up 1.00000 1.00000
216 MixedUse 5.82190 osd.216 up 1.00000 1.00000
220 MixedUse 5.82190 osd.220 up 1.00000 1.00000
234 MixedUse 1.45549 osd.234 up 1.00000 1.00000
235 MixedUse 1.45549 osd.235 up 1.00000 1.00000
237 MixedUse 1.45549 osd.237 up 1.00000 1.00000
238 MixedUse 1.45549 osd.238 up 1.00000 1.00000
209 ReadIntensive 1.74660 osd.209 up 1.00000 1.00000
215 ReadIntensive 1.74660 osd.215 up 1.00000 1.00000
217 ReadIntensive 1.74660 osd.217 up 1.00000 1.00000
218 ReadIntensive 1.74660 osd.218 up 1.00000 1.00000
219 ReadIntensive 1.74660 osd.219 up 1.00000 1.00000
221 ReadIntensive 1.74660 osd.221 up 1.00000 1.00000
222 ReadIntensive 1.74660 osd.222 up 1.00000 1.00000
223 ReadIntensive 1.74660 osd.223 up 1.00000 1.00000
224 ReadIntensive 1.74660 osd.224 up 1.00000 1.00000
-52 68.11653 host server-06080
79 MixedUse 5.82190 osd.79 up 1.00000 1.00000
83 MixedUse 5.82190 osd.83 up 1.00000 1.00000
98 MixedUse 5.82190 osd.98 up 1.00000 1.00000
108 MixedUse 5.82190 osd.108 up 1.00000 1.00000
113 MixedUse 5.82190 osd.113 up 1.00000 1.00000
164 MixedUse 5.82190 osd.164 up 1.00000 1.00000
240 MixedUse 5.82190 osd.240 up 1.00000 1.00000
258 MixedUse 1.45549 osd.258 up 1.00000 1.00000
259 MixedUse 1.45549 osd.259 up 1.00000 1.00000
261 MixedUse 1.45549 osd.261 up 1.00000 1.00000
262 MixedUse 1.45549 osd.262 up 1.00000 1.00000
268 MixedUse 5.82190 osd.268 up 1.00000 1.00000
203 ReadIntensive 1.74660 osd.203 up 1.00000 1.00000
204 ReadIntensive 1.74660 osd.204 up 1.00000 1.00000
205 ReadIntensive 1.74660 osd.205 up 1.00000 1.00000
206 ReadIntensive 1.74660 osd.206 up 1.00000 1.00000
207 ReadIntensive 1.74660 osd.207 up 1.00000 1.00000
208 ReadIntensive 1.74660 osd.208 up 1.00000 1.00000
241 ReadIntensive 1.74660 osd.241 up 1.00000 1.00000
242 ReadIntensive 1.74660 osd.242 up 1.00000 1.00000
243 ReadIntensive 1.74660 osd.243 up 1.00000 1.00000
-49 68.11653 host server-06081
77 MixedUse 5.82190 osd.77 up 1.00000 1.00000
81 MixedUse 5.82190 osd.81 up 1.00000 1.00000
90 MixedUse 5.82190 osd.90 up 1.00000 1.00000
104 MixedUse 5.82190 osd.104 up 1.00000 1.00000
105 MixedUse 5.82190 osd.105 up 1.00000 1.00000
112 MixedUse 5.82190 osd.112 up 1.00000 1.00000
244 MixedUse 5.82190 osd.244 up 1.00000 1.00000
264 MixedUse 5.82190 osd.264 up 1.00000 1.00000
282 MixedUse 1.45549 osd.282 up 1.00000 1.00000
283 MixedUse 1.45549 osd.283 up 1.00000 1.00000
285 MixedUse 1.45549 osd.285 up 1.00000 1.00000
286 MixedUse 1.45549 osd.286 up 1.00000 1.00000
197 ReadIntensive 1.74660 osd.197 up 1.00000 1.00000
198 ReadIntensive 1.74660 osd.198 up 1.00000 1.00000
199 ReadIntensive 1.74660 osd.199 up 1.00000 1.00000
200 ReadIntensive 1.74660 osd.200 up 1.00000 1.00000
201 ReadIntensive 1.74660 osd.201 up 1.00000 1.00000
202 ReadIntensive 1.74660 osd.202 up 1.00000 1.00000
265 ReadIntensive 1.74660 osd.265 up 1.00000 1.00000
266 ReadIntensive 1.74660 osd.266 up 1.00000 1.00000
267 ReadIntensive 1.74660 osd.267 up 1.00000 1.00000
-61 66.66104 datacenter DC4
-58 66.66104 rack N21
-55 66.66104 host server-06694
74 MixedUse 5.82190 osd.74 up 1.00000 1.00000
80 MixedUse 5.82190 osd.80 up 1.00000 1.00000
100 MixedUse 5.82190 osd.100 up 1.00000 1.00000
109 MixedUse 5.82190 osd.109 up 1.00000 1.00000
115 MixedUse 5.82190 osd.115 up 1.00000 1.00000
117 MixedUse 5.82190 osd.117 up 1.00000 1.00000
118 MixedUse 5.82190 osd.118 up 1.00000 1.00000
123 MixedUse 5.82190 osd.123 up 1.00000 1.00000
288 MixedUse 1.45549 osd.288 up 1.00000 1.00000
289 MixedUse 1.45549 osd.289 up 1.00000 1.00000
290 MixedUse 1.45549 osd.290 up 1.00000 1.00000
114 ReadIntensive 1.74660 osd.114 up 1.00000 1.00000
120 ReadIntensive 1.74660 osd.120 up 1.00000 1.00000
225 ReadIntensive 1.74660 osd.225 up 1.00000 1.00000
226 ReadIntensive 1.74660 osd.226 up 1.00000 1.00000
227 ReadIntensive 1.74660 osd.227 up 1.00000 1.00000
228 ReadIntensive 1.74660 osd.228 up 1.00000 1.00000
229 ReadIntensive 1.74660 osd.229 up 1.00000 1.00000
230 ReadIntensive 1.74660 osd.230 up 1.00000 1.00000
231 ReadIntensive 1.74660 osd.231 up 1.00000 1.00000
These questions have been asked many times on the ceph-users mailing list; I would recommend searching the archives for a more detailed explanation. But to briefly answer your questions:
What exactly are the "USED" and "MAX AVAIL" columns in 'ceph df'
output, and how is it calculated ?
"Used" is what it says, the raw storage the pool is using (including replication). You seem to be using an older ceph version, can you confirm? In your case "used" is three times the "stored" value, so you're probably using a replicated pool of size 3. The "max avail" value is an estimation of ceph based on several criteria like the fullest OSD, the crush device class etc. It tries to predict how much free space you have in your cluster, this prediction varies depending on how fast pools are getting full.
If I mount a CephFS space on a Linux machine, why does the "size"
column of "df -h" output change ? I mean, I had "48T" in size a few
days ago and now I have "46T", my CephFS pool is shrinking ???
I would assume you had OSDs down. The size of your cluster is calculated by taking every single OSD into account; if one fails or is down, the reported cluster size shrinks.
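A quick way to check whether that is what happened (standard commands, shown as a sketch):
ceph osd stat       # how many OSDs are up and in right now
ceph osd df tree    # per-OSD and per-host sizes that feed into the totals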

least squares with seasonal component in matlab

I was reading a paper that investigates trends in monthly wind-speed data over the past 20 years or so. The paper uses a number of different statistical approaches, which I am trying to replicate here.
The first method used is a simple linear regression model of the form
$$ y(t) = a_{1}t + b_{1} $$
where $a_{1}$ and $b_{1}$ can be determined by standard least squares.
Then they specify that some of the potential error in the linear regression model can be removed explicitly by accounting for the seasonal signal by fitting a model of the form:
$$ y(t) = a_{2}t + b_{2}\sin\left(\frac{2\pi t}{12} + c_{2}\right) + d_{2}$$
where coefficients $a_{2}$, $b_{2}$, $c_{2}$, and $d_{2}$ can be determined by least squares. They then go on to specify that this model was also tested with additional harmonic components of 3, 4, and 6 months.
Using the following data as an example:
% 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960
y = [112 115 145 171 196 204 242 284 315 340 360 417 % Jan
118 126 150 180 196 188 233 277 301 318 342 391 % Feb
132 141 178 193 236 235 267 317 356 362 406 419 % Mar
129 135 163 181 235 227 269 313 348 348 396 461 % Apr
121 125 172 183 229 234 270 318 355 363 420 472 % May
135 149 178 218 243 264 315 374 422 435 472 535 % Jun
148 170 199 230 264 302 364 413 465 491 548 622 % Jul
148 170 199 242 272 293 347 405 467 505 559 606 % Aug
136 158 184 209 237 259 312 355 404 404 463 508 % Sep
119 133 162 191 211 229 274 306 347 359 407 461 % Oct
104 114 146 172 180 203 237 271 305 310 362 390 % Nov
118 140 166 194 201 229 278 306 336 337 405 432 ]; % Dec
% year/month grids matching y (rows = months Jan-Dec, columns = years 1949-1960)
[yr,mo] = meshgrid(1949:1960, 1:12);
jday = datenum(yr(:), mo(:), 1);   % one serial date number per monthly value
y2 = reshape(y,[],1);              % stack column-wise: Jan-Dec 1949, Jan-Dec 1950, ...
plot(jday, y2)
datetick('x')                      % show dates on the time axis
Can anyone demonstrate how the model above can be written in matlab?
Notice that your model is actually linear; a trigonometric identity shows why. (If you really want to fit a nonlinear model, use nlinfit.)
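Spelling the identity out: the angle-addition formula for sine gives
$$ b_{2}\sin\left(\frac{2\pi t}{12} + c_{2}\right) = \left(b_{2}\cos c_{2}\right)\sin\frac{2\pi t}{12} + \left(b_{2}\sin c_{2}\right)\cos\frac{2\pi t}{12}, $$
so the amplitude $b_{2}$ and phase $c_{2}$ fold into two coefficients that multiply the known regressors $\sin\frac{2\pi t}{12}$ and $\cos\frac{2\pi t}{12}$, and the model is linear in its unknowns. That is why the design matrix below contains both a sine and a cosine column for each period.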
Using your data, I wrote the following script to compute and compare the different methods
(you can comment out the opts.RobustWgtFun = 'bisquare'; line to see that the nonlinear fit matches the linear fit with the 12-month periodicity only):
% y = [112 115 ...
y2 = reshape(y,[],1);
t=(1:144).';
% trend
T = [ones(size(t)) t];
B=T\y2;
y_trend = T*B;
% least squares, using a linear fit and the 12-month periodicity only
T = [ones(size(t)) t sin(2*pi*t/12) cos(2*pi*t/12)];
B=T\y2;
y_sincos = T*B;
% least squares, using a linear fit and the 3-, 4-, 6- and 12-month periodicities
addharmonics = [3 4 6];
T = [T bsxfun(@(h,t)sin(2*pi*t/h),addharmonics,t) bsxfun(@(h,t)cos(2*pi*t/h),addharmonics,t)];
B=T\y2;
y_sincos2 = T*B;
% least squares with bisquare weights,
% using a nonlinear model of the linear fit plus the 12-month periodicity only
opts = statset('nlinfit');
opts.RobustWgtFun = 'bisquare';
b0 = [1;1;0;1];
modelfun = @(b,x) b(1)*x+b(2)*sin((b(3)+x)*2*pi/12)+b(4);
b = nlinfit(t,y2,modelfun,b0,opts);
% plot a comparison
figure
plot(t,y2,t,y_trend,t,modelfun(b,t),t,y_sincos,t,y_sincos2)
legend('Original','Trend','bisquare weight - 12 periodicity only', ...
'least square - 12 periodicity only','least square - 3,4,6,12 periodicities', ...
'Location','NorthWest');
xlim(minmax(t'));

How to reshape two columns into multiple columns in MATLAB

Does anyone know how I can reshape these two columns:
1 1
1 1
1 1
379 346
352 363
330 371
309 379
291 391
271 402
268 403
1 1
1 1
406 318
379 334
351 351
329 359
307 367
287 378
267 390
264 391
into these four columns:
1 1 1 1
1 1 1 1
1 1 406 318
379 346 379 334
352 363 351 351
330 371 329 359
309 379 307 367
291 391 287 378
271 402 267 390
268 403 264 391
That is, how do I reshape a matrix of size Nx2 into size 10xM in MATLAB?
One solution uses mat2cell, splitting every 10 rows. It is probably easier to understand because no 3-D matrices are involved:
cell2mat(mat2cell(x,repmat(10,size(x,1)/10,1),size(x,2))')
A second solution uses reshape and permute; it should be faster, but I did not benchmark it:
reshape(permute(reshape(x,10,[],size(x,2)),[1,3,2]),10,[])
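A quick way to sanity-check that both one-liners agree (a sketch using a synthetic 20-by-2 matrix; any row count divisible by 10 works the same way):
x = reshape(1:40, 20, 2);                                                 % test matrix, 20 rows, 2 columns
out1 = cell2mat(mat2cell(x, repmat(10, size(x,1)/10, 1), size(x,2))');
out2 = reshape(permute(reshape(x, 10, [], size(x,2)), [1,3,2]), 10, []);
isequal(out1, out2)                                                       % true: both give the same 10-by-4 result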

Multidimensional scaling matrix error

I'm trying to use multidimensional scaling in MATLAB. The goal is to convert a similarity matrix into a scatter plot (in order to use k-means).
I've got the following test set:
London Stockholm Lisboa Madrid Paris Amsterdam Berlin Prague Rome Dublin
0 569 667 530 141 140 357 396 570 190
569 0 1212 1043 617 446 325 423 787 648
667 1212 0 201 596 768 923 882 714 714
530 1043 201 0 431 608 740 690 516 622
141 617 596 431 0 177 340 337 436 320
140 446 768 608 177 0 218 272 519 302
357 325 923 740 340 218 0 114 472 514
396 423 882 690 337 272 114 0 364 573
569 787 714 516 436 519 472 364 0 755
190 648 714 622 320 302 514 573 755 0
I got this dataset from the book Modern Multidimensional Scaling (Borg & Groenen, 2005). I tested it in SPSS using the PROXSCAL MDS method and got the same result as stated in the book.
But I need to use MDS in MATLAB in order to speed up the process. The tutorial at http://www.mathworks.nl/help/stats/multidimensional-scaling.html#briu08r-4 looks like what I'm doing above. When I change the data set to the one displayed above and run the code, I get the following error: "Not a valid dissimilarity or distance matrix.".
I'm not sure what I'm doing wrong, or whether classical MDS is the right choice. I also don't see how to specify that I want the result in three dimensions (this will be needed at a later stage).
Your matrix is not symmetric; check the entries at indices (9,1) and (1,9). To quickly find the asymmetric indices, use [x,y] = find(~(D' == D)).
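Once the matrix is symmetric, classical MDS runs and you can keep the first three dimensions for the later k-means step. A minimal sketch, assuming D holds the 10-by-10 distance matrix from the question (averaging is just one way to force symmetry; you can also fix the single wrong entry directly):
D = (D + D')/2;               % force symmetry; the zero diagonal is preserved
[Y, eigvals] = cmdscale(D);   % classical MDS (Statistics Toolbox)
coords = Y(:, 1:3);           % keep the first three dimensions
idx = kmeans(coords, 3);      % e.g. cluster the cities into three groups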