How to read from a Redis replica with go-redis - Kubernetes

We have a Go service which goes to Redis to fetch data for each request, and we want to read that data from the Redis replica nodes as well. We went through the documentation of Redis and the go-redis library and found that, in order to read data from a Redis replica, the READONLY command must be issued on the connection. We are using ClusterOptions in the go-redis library to set up a read-only connection to Redis:
redis.NewClusterClient(&redis.ClusterOptions{
    Addrs:    []string{redisAddress},
    Password: "",
    ReadOnly: true,
})
After doing all this, we can see (using monitoring) that read requests are still handled by the master nodes only. I assume this is not expected and that I am missing something or doing it wrong. Any pointers to solve this problem would be appreciated.
Some more context:
redisAddress in the code above is a single Kubernetes cluster IP. Redis is deployed using a Kubernetes operator, with 3 masters and 1 replica per master.

I've done it by also setting the option RouteRandomly: true.
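For reference, a minimal sketch of the combined options, assuming go-redis v8 (the address is a placeholder standing in for the single cluster IP from the question):

package main

import (
    "context"
    "fmt"

    "github.com/go-redis/redis/v8"
)

func main() {
    // ReadOnly enables the READONLY command on replica connections, but the
    // asker still saw reads landing on the masters; adding RouteRandomly (or
    // RouteByLatency) makes go-redis spread read commands across all nodes,
    // replicas included.
    client := redis.NewClusterClient(&redis.ClusterOptions{
        Addrs:         []string{"redis-cluster:6379"}, // placeholder address
        Password:      "",
        ReadOnly:      true,
        RouteRandomly: true,
    })

    val, err := client.Get(context.Background(), "some-key").Result()
    if err != nil {
        fmt.Println("GET failed:", err)
        return
    }
    fmt.Println("value:", val)
}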

Related

Zalando operator - load balance read/write pgbouncer

I have installed a Postgres cluster using the Zalando operator
and enabled pgbouncer for both the replicas and the master.
I would like to combine or load-balance the replica and master connections,
so that read requests are routed to the read replicas and write requests are routed to the master.
Can anyone help me achieve this?
Thanks in advance.
I tried enabling pgbouncer,
but it gets enabled for either the master or the replicas.
What I need is a single entry point that routes read requests to the replicas and write requests to the master.
There is no safe way to distinguish reading and writing statements in PostgreSQL. pgPool tries to do that, but I think any such solution is flaky. You will have to teach your application to direct reads and writes to different data sources.
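As a rough sketch of what "teaching the application" can look like in Go, assuming separate master and replica DSNs (the type and function names here are hypothetical, not part of the Zalando setup):

package main

import (
    "database/sql"

    _ "github.com/lib/pq" // PostgreSQL driver
)

// splitDB keeps one pool for writes (master) and one for reads (replicas).
type splitDB struct {
    write *sql.DB
    read  *sql.DB
}

func openSplit(masterDSN, replicaDSN string) (*splitDB, error) {
    w, err := sql.Open("postgres", masterDSN)
    if err != nil {
        return nil, err
    }
    r, err := sql.Open("postgres", replicaDSN)
    if err != nil {
        return nil, err
    }
    return &splitDB{write: w, read: r}, nil
}

// Exec routes writes to the master pool.
func (db *splitDB) Exec(query string, args ...interface{}) (sql.Result, error) {
    return db.write.Exec(query, args...)
}

// Query routes reads to the replica pool.
func (db *splitDB) Query(query string, args ...interface{}) (*sql.Rows, error) {
    return db.read.Query(query, args...)
}

One caveat with any scheme like this: reads that must see a transaction's own writes should go through the write pool, since the replicas may lag behind the master.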
I don't think pgbouncer provides any out-of-the-box way to load-balance read and write queries. An alternative is to use Pgpool as the connection pooler. Pgpool provides a mode known as load_balance_mode which you can turn on; it will then send write queries to the master and load-balance read queries across the replicas. You can read more about load_balance_mode here.
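For orientation, the relevant part of pgpool.conf looks roughly like this (the hostnames and weights are placeholders for your own backends):

load_balance_mode = on

backend_hostname0 = 'postgres-master'   # receives all writes
backend_port0     = 5432
backend_weight0   = 0                   # weight 0: send no read load here

backend_hostname1 = 'postgres-replica'
backend_port1     = 5432
backend_weight1   = 1                   # balanced reads go to the replica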

Configure Spring Data Redis to perform all operations via the ElastiCache configuration endpoint?

Description
Is it possible for Spring Data Redis to use ElastiCache's configuration endpoint to perform all cluster operations (i.e., reading, writing, etc.)?
Long Description
I have a Spring Boot application that uses a Redis cluster as its data store. The Redis cluster is hosted on AWS ElastiCache with cluster mode enabled. The ElastiCache cluster has 3 shards spread over 12 nodes, running Redis version 6.0.
The service isn't correctly writing or retrieving data from the cluster. Whenever I perform either of these operations, I get a message similar to the following:
io.lettuce.core.RedisCommandExecutionException: MOVED 16211 10.0.7.254:6379
From searching the internet, it appears that the service isn't correctly configured for a cluster. The fix seems to be to set the spring.redis.cluster.nodes property to a list of all the nodes in the ElastiCache cluster (see here and here). I find this rather needless, considering that the ElastiCache configuration endpoint is supposed to be usable for all read and write operations (see the "Finding Endpoints for a Redis (Cluster Mode Enabled) Cluster" section here).
My question is this: can Spring Data Redis use ElastiCache's configuration endpoint to perform all reads and writes, the way the AWS documentation describes? I'd rather not hand over a list of all the nodes if Spring Data Redis can use the configuration endpoint the way it's meant to be used. Having to list every node seems like a serious limitation to me.
Thanks in advance!
Here is what I found works:
@Bean
public RedisConnectionFactory lettuceConnectionFactory() {
    LettuceClientConfiguration config =
        LettucePoolingClientConfiguration
            .builder()
            // ... your configuration settings ...
            .build();
    RedisClusterConfiguration clusterConfig = new RedisClusterConfiguration();
    // Seed the cluster configuration with only the configuration endpoint;
    // the client discovers the rest of the topology from it.
    clusterConfig.addClusterNode(new RedisNode("xxx.v1tc03.clustercfg.use1.cache.amazonaws.com", 6379));
    return new LettuceConnectionFactory(clusterConfig, config);
}
where xxx is the name of your ElastiCache cluster.
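If you'd rather stay with properties, it may also be enough to point the node-list property at just the configuration endpoint instead of enumerating every node; a sketch under the same assumptions as the bean above:

spring.redis.cluster.nodes=xxx.v1tc03.clustercfg.use1.cache.amazonaws.com:6379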

Why does my Tarantool Cartridge cluster sometimes retrieve data from the router instance?

I wonder why my Tarantool Cartridge cluster is not working as it should.
I have a Cartridge cluster running on Kubernetes; the Cartridge image is generated with the Cartridge CLI (cartridge pack), and no changes were made to the generated files.
The Kubernetes cluster is deployed via Helm with the following values:
https://gist.github.com/AlexanderBich/eebcf67786c36580b99373508f734f10
Issue:
When I make requests from the pure-PHP Tarantool client, for example a SELECT SQL request, it sometimes retrieves the data from the storage instances, but sometimes it unexpectedly responds with data from the router instance instead.
The same goes for INSERT: after I created the same schema on both the storage and the router instances and made 4 requests, 2 rows ended up in storage and 2 in the router.
That's weird, and from reading the documentation I'm sure it's not the intended behaviour. I'm struggling to find the source of this behaviour and hope for your help.
SQL in Tarantool doesn't work in cluster mode, e.g. with Tarantool Cartridge.
P.S. That was the response to my question from the Tarantool community in the Tarantool Telegram chat.

MongoDB data replication in Kubernetes

I've been configuring pods in Kubernetes to hold a MongoDB and a Golang image each, with a service to load-balance. The major issue I am facing is data replication between the databases. Replication controllers / ReplicaSets do not seem to do what the name implies; they create blank-slate copies rather than replicas of existing, currently running pods. I cannot find any examples or clear answers on how Kubernetes addresses this, or whether it even does.
For example, data insertions sent by the Go program are automatically load-balanced by the service to one of X replicated instances of MongoDB. This poses a problem, since the instances will all maintain separate documents without any relation to one another once Kubernetes begins to balance connections among the pods. Is there a way to address this in Kubernetes, or does it require a complete rewrite of the Go code to expect data replication among numerous available databases?
Sorry, I'm relatively new to Kubernetes and couldn't find much information regarding this.
You're right: a ReplicaSet is not a replica of another container, it's just a container with the same configuration spun up within the same logical unit.
A ReplicaSet (or Deployment, which is the resource you should be using now) will have multiple pods, and it's up to you, the operator, to configure the MongoDB part.
I would recommend reading this example of how to set up a replica set with multiple MongoDB containers:
https://medium.com/google-cloud/mongodb-replica-sets-with-kubernetes-d96606bd9474#.e8y706grr
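As a rough sketch of the shape this takes (the names, image tag, and replica count are placeholders, and the MongoDB replica set itself still has to be initiated, e.g. with rs.initiate(), once the pods are up):

# Headless service: gives each pod a stable DNS name for the replica set.
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  selector:
    app: mongo
  ports:
    - port: 27017
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:4.4
          # Kubernetes only starts identical pods; MongoDB-level replication
          # comes from running mongod with a replica-set name and initiating
          # the set against the pods' stable hostnames.
          command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
          ports:
            - containerPort: 27017

The Go client then connects with a replica-set-aware connection string (for example mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo/?replicaSet=rs0) rather than through a load-balanced service, so the driver always sends writes to the current primary.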

Is it possible to set up an ejabberd cluster in master-slave mode where data is not being replicated?

I am trying to set up a very simple cluster of 2 ejabberd nodes. However, while following the official ejabberd documentation and using the join_cluster argument of the ejabberdctl script, I always end up with a multi-master cluster where both Mnesia databases hold replicated data.
Is it possible to set up an ejabberd cluster in master-slave mode? And if yes, what am I missing?
In my understanding, a slave gets the data replicated but is simply not active; it needs the data to be able to take over the master's task at some point.
That seems to mean that the core of the setup you describe is not about disabling replication but about not sending traffic to the slave, no?
In that case, it is just a matter of configuring your load-balancing mechanism to route traffic according to your preference.