Is it safe to run Ceph with 2-way replication on 3 OSD nodes? - ceph

Let's say I want to achieve maximum usable capacity with data resilience on this 3-node OSD setup, where each node contains 2x 1 TB OSDs.
Is it safe to run 3 Ceph nodes with 2-way replication?
What are the pros and cons of using 2-way replication? Will it cause a data split-brain?
Last but not least, what failure-domain fault tolerance will I get running 2-way replication?
Thanks!

Sometimes even three replicas are not enough, e.g. if the SSD disks (from the cache tier) fail together or one by one.
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-October/005672.html
For two OSDs you can even manually set 1 replica as the minimum and 2 replicas as the maximum (I didn't manage to get this applied automatically in the case of one failed OSD out of the three):
osd pool default size = 2 # Write an object 2 times
osd pool default min size = 1 # Allow writing 1 copy in a degraded state
But this command: ceph osd pool set mypoolname min_size 1
sets it for a specific pool, not just the defaults.
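For example, a minimal sketch of setting and checking these values for an existing pool (the pool name mypoolname is reused from above):
ceph osd pool set mypoolname size 2       # keep 2 copies of every object
ceph osd pool set mypoolname min_size 1   # still accept I/O with only 1 copy left (risky)
ceph osd pool get mypoolname size         # verify the current values
ceph osd pool get mypoolname min_size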
For n = 4 nodes, each with 1 OSD and 1 monitor, and settings of min_size 1 and size 4, three OSDs can fail but only one monitor can fail (the monitor quorum requires more than half to survive). To tolerate two failed monitors you need 4 + 1 monitors (at least one of them external, without an OSD). With 8 monitors (four of them external), three monitors can fail, so even three nodes, each with 1 OSD and 1 monitor, can fail. I am not sure that a setup with 8 monitors is possible.
Thus, for three nodes, each with one monitor and one OSD, the only reasonable settings are min_size 2 with size 3 or 2. Only one node can fail.
If you have external monitors, and you set min_size to 1 (this is very dangerous) and size to 2 or 1, then two nodes can be down. But with one replica (no copy, only the original data) you can lose your job very soon.
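If you test these failure scenarios, the standard commands below (a sketch, nothing specific to this setup) show how many monitors are in quorum and whether placement groups are degraded:
ceph -s               # overall health, including degraded/undersized PGs
ceph quorum_status    # which monitors currently form the quorum
ceph osd tree         # which OSDs/hosts are up or down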

Related

One server in a MongoDB sharded cluster is having high CPU

We have a cluster with five shards, and in it one instance is getting high CPU.
When we check the CPU of the 5 shards, the remaining 4 instances sit at around 20%, but that 1 instance reaches more than 90%.
How can I decrease the load on that 1 instance?
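A first thing worth checking (shown here only as a sketch; mydb.orders and the hostnames are placeholders) is whether chunks and data are evenly spread across the shards, since a skewed shard key or a stopped balancer will pile most of the traffic onto one shard:
mongosh "mongodb://mongos-host:27017" --eval 'sh.status()'                                          # chunks per shard, balancer state
mongosh "mongodb://mongos-host:27017" --eval 'db.getSiblingDB("mydb").orders.getShardDistribution()'
mongostat --host overloaded-shard-host:27018                                                        # live operation counters on the hot shard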

Can't allocate more than 1 core to a container

I'm having an issue allocating more than 1 CPU to a pod that is running code that requires more processing power.
I have set my limit for the container to 3 CPUs
and have set the container to request 2 CPUs with a limit of 3.
But when running the container, it never goes over 1000m (1 CPU).
There is very little else running during this process, and KEDA will start new nodes if needed.
How can I assign more CPU power to this container?
UPDATE
So I changed the default limit as suggested by moonkotte, but I can only ever get just over 1 CPU.
New nodes come online through KEDA when more containers are required.
Each node has 4 CPUs, so there are sufficient resources.
These are the details of each node; in this case it is running one of the containers in question.
It just isn't using all of the CPU allocated.
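As a minimal sketch (the deployment name my-app and the pod/node names are placeholders), requests and limits can be set and verified from the CLI; note also that a limit only caps usage, so if the code itself is effectively single-threaded it will never use much more than 1000m no matter how high the limit is:
kubectl set resources deployment my-app --requests=cpu=2 --limits=cpu=3   # set CPU request/limit on the container
kubectl describe pod <pod-name> | grep -i -A2 limits                      # confirm the values were applied
kubectl top pod <pod-name>                                                # live usage (needs metrics-server)
kubectl describe node <node-name> | grep -i -A6 allocat                   # allocatable CPU on the node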

MongoDB Sharding and Replication

I've already set up MongoDB sharding and now I need to set up replication for availability. How do I do this? I've currently got this:
2 mongos instances running in different datacenters
2 mongod config servers running in different datacenters
2 mongod shard servers running in different datacenters
all communication is over a private network setup by my provider that is available cross-datacenter
Do I just set up replication on each server (by assigning each one a secondary)?
I would build the whole system across 3 DCs, for redundancy.
Every data center would have three servers running these services:
1x mongoS at Server1
1x node of config server replica set at Server1
1x node of shard1 replica set at Server2
1x node of shard2 replica set at Server3
So, a total of 9 nodes (physical or virtual).
If we "lose" one DC, everything works still, because we have a majority in all three replica sets.
You need 3 servers in each replica set for redundancy. Either put the third one in one of the data centers or get a third data center.
The config replica set needs 3 servers.
Each of the shard replica sets needs 3 servers.
You can keep the 2 mongoses.
After reading through the suggestions from D. SM and JJussi (thanks by the way), I'll be implementing the following infrastructure:
3 mongos instances spread across different datacenters
3 config servers spread across different datacenters
2 shards, each with 2 storage servers spread across different datacenters plus an arbiter (to cut down on costs for now)
Thanks once again for your input.
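For the arbiter variant, once a shard's two data-bearing members are running, the arbiter can be added from the primary roughly like this (the hostname shard1-arb and the port are assumptions; an arbiter votes but stores no data, so ideally it sits in a third location):
mongosh --host shard1-dc1:27018 --eval 'rs.addArb("shard1-arb:27018")'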

Artemis 2.6.0 three node cluster

I want to build a 3-node symmetric cluster (to avoid split-brain) with high availability using replication. In addition, I would like to be able to load-balance messages between the nodes.
How should this be achieved?
Option 1: 1 master with 2 slaves
Option 2: 3 colocated master/slave nodes
Option 1 isn't really an option, as the slaves will not participate in the voting process, which means split-brain will not be mitigated. The only option you have left (of the two you listed, of course) is to use 3 colocated master/slave nodes.

Configuring replica set in a multi-data center

We have the following multi-data-center scenario:
Node1 --- Node3
  |         |
  |         |
Node2     Node4
Node1 and Node3 form a (sort of) replica set (for high availability).
Node2/Node4 are priority-0 members (they should never become primaries; they are solely for read purposes).
Caveat: what is the best way to design such a setup, given that Node2 and Node4 are not accessible to one another because of the way we configured our VPN/firewalls, which essentially rules out any heartbeat between Node2 and Node4?
Thanks Much
Here's what I have in mind:
Don't keep an even number of members in a set. So you either need an arbiter, or you should make one of Node2/Node4 a non-voting member.
I'm using the C# driver, so I'm not sure whether you are using the same technology to build your application. Anyway, it turns out the C# driver obtains the complete list of available servers from the seeds (the servers you provide in the connection string) and tries to load-balance requests across all of them. In your situation, I guess you would have application servers running in all 3 data centers. However, you probably don't want, for example, Node1 to accept connections from a different data center; that would significantly slow down the application. So you need some further settings:
Set node 3/4 to hidden nodes.
For applications running in the same data center as node 3/4, don't configure the replicaSet parameter in the connection string, but do configure readPreference=secondary. If you need to write, you'll have to configure another connection string that points at the primary node.
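For example, connection strings of the kind described could look like this (hostnames are placeholders, and the replicaSet parameter is deliberately omitted on the read-only one; the replica set name rs0 is an assumption):
mongodb://node4:27017/?readPreference=secondary    # read-only, DC-local connection to the nearby secondary
mongodb://node1:27017,node3:27017/?replicaSet=rs0  # separate connection string for writes (follows the primary)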
If you make the votes of Node2 and Node4 also 0, then in a failover it should act as though only Node1 and Node3 are eligible. If you set them to hidden you have to connect to them explicitly; MongoDB drivers will intentionally avoid them otherwise.
Other than that, Node2 and Node4 have direct access to whatever node ends up being the primary, so I see no other problem.
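A minimal sketch of that reconfiguration, run against the current primary with mongosh (or the legacy mongo shell); the member indexes 1 and 3 for Node2/Node4 are assumptions about the order in rs.conf(), so check that output first:
mongosh --host node1:27017 --eval '
  cfg = rs.conf();
  cfg.members[1].priority = 0; cfg.members[1].votes = 0;   // Node2: never primary, non-voting
  cfg.members[3].priority = 0; cfg.members[3].votes = 0;   // Node4: never primary, non-voting
  rs.reconfig(cfg);
'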