Cassandra not replicating between two servers

Are there any online tutorials on how to set up two basic servers and get them replicating data?
I have some data on one server, and I have connected another and created the keyspace on the second server, but nothing is happening. What am I missing?
How do you ensure other people can't connect to port 7000? Can you use SSL certificates?
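For what it's worth, data only replicates if both nodes belong to the same cluster (same cluster_name and seed list in cassandra.yaml) and the keyspace's replication settings cover both of them; creating the keyspace separately on each node does nothing. A minimal sketch with the DataStax Python driver, where the hostnames, the datacenter name "dc1" and the keyspace name are placeholders:

    # Minimal sketch using the DataStax Python driver (pip install cassandra-driver).
    # Assumes both nodes are already in the same cluster (same cluster_name and
    # seed list in cassandra.yaml) and report datacenter "dc1" in `nodetool status`.
    from cassandra.cluster import Cluster

    cluster = Cluster(["node1.example.com", "node2.example.com"])  # placeholder hosts
    session = cluster.connect()

    # Replication only happens for keyspaces whose strategy covers the datacenter;
    # a replication factor of 2 in "dc1" puts a copy of every row on both nodes.
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS mykeyspace
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 2}
    """)

    # A keyspace that was created earlier with RF 1 has to be altered and then
    # repaired (nodetool repair) before the existing data shows up on the new node.
    session.execute("""
        ALTER KEYSPACE mykeyspace
        WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 2}
    """)
    cluster.shutdown()

As for port 7000, Cassandra supports internode TLS via server_encryption_options in cassandra.yaml (encrypted traffic uses the ssl_storage_port, 7001 by default), and the storage ports should normally also be firewalled off from anything that is not a cluster node.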

Related

High-Availability of Keycloak across remote sites

I’ve been looking into Keycloak as an on-prem IAM and SSO solution for my company. One thing that I’m unclear on from reading the documentation is whether Keycloak’s clustered mode can handle our requirement for instance federation across sites.
We have some remote manned sites that occasionally run critical telemetry-gathering processes. Our AD domain is replicated to those sites.
The issue is that there is a single internet link to the sites. If we had Keycloak at the main office and the internet link went down for a day, any software at the remote site that relies on Keycloak to authenticate wouldn’t work (which would be a big problem).
Can we set up Keycloak in cluster mode (i.e., putting an instance at each site), so that if this link went out, remote users could connect to their local instance automatically and authenticate with local apps? And what happens when the connection is restored and the databases are out of sync: does Keycloak automatically repair this?
Cheers
In general the answer is "yes": you can set up two Keycloak instances in different locations and link them with each other in a cluster (under the hood this is Infinispan cache replication). But it depends on the details of your infrastructure.
The main goal of a Keycloak cluster is to replicate the session caches between nodes. So in the simplest case you can set up two nodes that point to the same DB instance; when the first node goes down, the second handles the whole job, but if the DB also goes down the second node is useless. In that case each site should have both its own Keycloak node and a DB replica (how to achieve DB replication is out of scope here). A third option is to use the multitenancy feature of the Keycloak application adapter, in which case you secure the application with two separate Keycloak instances that know nothing about each other.
Start with this documentation article:
https://www.keycloak.org/docs/latest/server_installation/index.html#crossdc-mode

Kafka Connect Distributed Mode SSL Client Auth

I’m trying to use ssl.client.auth=required within Kubernetes for some last mile protection.
This approach works fine for a single worker node, but when scaled up to multiple workers, SSL problems occur because the workers do not present their own client certificates to each other.
Is this approach possible, or will I need to take a different route?
Thanks!

Can pgpool-II balance PostgreSQL servers with multiple instances each?

I have configured two PostgreSQL 9.5 instances in master-slave mode and could successfully configure PgPool-II for load balancing between them, and failover is just working fine (took me 2 weeks and lots of errors, but it finally worked).
My question now is: imagine that one PostgreSQL server hosts multiple instances and the second server hosts the matching instances, each pair listening on its own port, and each of these instance pairs configured for master-slave replication. Is it possible to configure load balancing and failover for all these instances with only one PgPool installation, or should I configure one PgPool for each PostgreSQL instance?
Thanks in advance,
Igor Felix
Since I found nothing on Google, no one answered, and nothing turned up in the source code, I am answering this question myself: it is not possible; you need one PgPool installation for each PostgreSQL cluster.

Distributed nodes to sync data without a single point of failure

Recently I've been reading about methods to horizontally scale a server.
The majority of methods, if not all, require a central server to sync the data (Redis is widely used here). That means that multiple nodes rely on one central server, one point of failure, to have data synced between them.
I know that I can have multiple Redis instances, and those nodes can connect to the backup Redis servers and keep syncing the data in case the main Redis server fails, but I don't like the idea of having "2 layers" (a server layer and a Redis layer) to achieve this. Is there any piece of software, or method, to keep multiple nodes synced while they are connected only to each other?
I mean, like having the 3 Redis servers installed on the same 3 server machines. If one server goes down, it goes down along with its Redis server, so the other 2 server machines connect to the fallback Redis servers installed on their own machines; but instead of this hackish thing, have a piece of software that simply connects the nodes to each other and syncs the data.
I've been thinking about such a system, but one problem arises, a problem that also applies to the "multiple Redis instances" setup.
Imagine that I have X Redis servers in different datacenters, and also X application servers in different datacenters; I don't want them in the same datacenter, to avoid single points of failure. Now, let's say that half of the servers lose the connection to the other half, not because machines go down, but because of connection failures (an ISP problem and such). The slaves will think the master Redis server is dead because the connection dropped and they cannot reconnect, and the same goes for the application servers that also lost the connection: they will think the master Redis is dead and will connect and sync data using the slaves.
Now we have a scenario with one master Redis server and X servers syncing data between them, and some slave Redis servers with the other X servers syncing data between them.
With this, the whole distributed system built to avoid a single point of failure is a failure by itself, because it still has a single point of failure which, instead of making people lose data, leaves the system with unsynced garbage data.
And as this is realtime data, "save timestamps and resync the data when the connection recovers a few minutes later" is not a solution.
What do you think? Any solution?
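Not a full answer, but the standard way systems avoid exactly this split-brain is a majority quorum: a node only keeps accepting writes while it can reach more than half of the known members, so at most one side of a partition can stay writable (Redis Sentinel, Redis Cluster and consensus protocols like Raft are built on this idea). A rough sketch of just the check, where ping_peer is a hypothetical reachability helper and the membership list is static:

    # Illustrative quorum check only: `ping_peer` is a hypothetical helper and the
    # membership list is static; real systems (Raft, Sentinel, etc.) add leader
    # elections, epochs/terms and persistent logs on top of this idea.
    from typing import Callable, List

    def has_quorum(peers: List[str], ping_peer: Callable[[str], bool]) -> bool:
        """Return True if this node plus reachable peers form a strict majority."""
        cluster_size = len(peers) + 1                  # peers + this node
        reachable = 1 + sum(1 for p in peers if ping_peer(p))
        return reachable * 2 > cluster_size

    def handle_write(peers: List[str], ping_peer: Callable[[str], bool], apply_write) -> bool:
        # Refuse writes on the minority side of a partition instead of diverging.
        if not has_quorum(peers, ping_peer):
            return False                               # caller retries against the majority side
        apply_write()
        return True

The price is that the minority side rejects writes instead of serving diverging data, which for realtime data is usually what you want.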

How is the primary server going down handled automatically in MongoDB replication?

I have never really had my hands on coding. I have a doubt regarding MongoDB replica sets.
Below is the situation:
I have an alert monitoring application.
It is using mongodb with replica set with 3 nodes.
The application's Java code base keeps connecting to the primary and doing some transactions.
Now my question is:
if the primary server goes down, how will it affect the application server?
I mean, will the app server write errors saying the connection failed?
OR
will the replica set automatically pick one of the slaves as master and let the application server carry on with its activity? How does that happen...?
Thanks & Regards,
UDAY
The replica set will try to pick another server as the new primary. If you have three nodes and one goes down, the other two will negotiate which one becomes the new master. If two go down, or communication between the remaining nodes somehow breaks down, there will be no new master until the situation is resolved.
The official drivers support this automatic fail-over, as does the mongos routing server if you use it. So the application code does not need to do anything here.
I am not sure if there will be connection errors during the brief period of time this fail-over negotiation takes (you will probably get errors for a few seconds).
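To make that concrete, here is a small sketch with the Python driver (the Java driver behaves the same way); the host names, the replica set name "rs0" and the database/collection names are placeholders:

    # Minimal sketch with PyMongo; hosts, "rs0" and db/collection names are placeholders.
    import time
    from pymongo import MongoClient
    from pymongo.errors import AutoReconnect

    # Listing all members plus the replicaSet option lets the driver discover the
    # current primary on its own, and rediscover it after a failover.
    client = MongoClient(
        "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0",
        retryWrites=True,
    )

    def insert_alert(doc, attempts=5):
        # During the election (typically a few seconds) writes fail; a small retry
        # loop bridges the gap until the new primary is chosen.
        for attempt in range(attempts):
            try:
                return client.monitoring.alerts.insert_one(doc)
            except AutoReconnect:
                time.sleep(1)
        raise RuntimeError("no primary available after failover window")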