Can "spring.cloud.consul.host" config value have multiple Consul agents? - spring-cloud

I'm a bit confused by this configuration. My Spring Boot app with @EnableDiscoveryClient has spring.cloud.consul.host set to localhost. I'm running a Consul agent on the host where my Boot app runs, but I have a few questions (I can't seem to find the answers in the documentation).
Can this config accept multiple values?
If so, I'd prefer to set the value to a list of Consul server addresses (but then, what would be the point of running Consul agents at all? That doesn't seem practical, which means I'm not understanding something here).
If not, are we expected to run a Consul agent on every node where a Boot app with @EnableDiscoveryClient is running? (This feels wrong as well; for one, that single agent looks like a single point of failure, even though one agent should be able to tell me everything about the cluster. What if I can't contact that one agent?)
What's the best practice for this configuration?

Actually, this is exactly the problem Consul itself solves. An agent runs on every server and handles clustering, failure detection, data sharing, auto-discovery, etc. for you, so you don't need to know the other hosts in your Spring Boot configuration. The Spring Boot app always connects to the agent running on the same machine.
See https://www.consul.io/docs/agent/basics.html
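To the original question: spring.cloud.consul.host takes a single host, so you point each app at its local agent rather than listing Consul servers. As a minimal sketch (hypothetical class name, assuming spring-cloud-starter-consul-discovery is on the classpath), the app itself needs to know nothing about the rest of the cluster; the local agent forwards catalog queries to the Consul servers, so this one client still sees services registered through any agent in the datacenter:

    import org.springframework.boot.CommandLineRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.client.ServiceInstance;
    import org.springframework.cloud.client.discovery.DiscoveryClient;
    import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

    // Hypothetical demo app: spring.cloud.consul.host stays "localhost" on every node.
    @SpringBootApplication
    @EnableDiscoveryClient
    public class ConsulDemoApplication implements CommandLineRunner {

        private final DiscoveryClient discoveryClient;

        public ConsulDemoApplication(DiscoveryClient discoveryClient) {
            this.discoveryClient = discoveryClient;
        }

        @Override
        public void run(String... args) {
            // Lists every service instance known to the whole datacenter,
            // even though we only ever talk to the agent on localhost.
            for (String serviceId : discoveryClient.getServices()) {
                for (ServiceInstance instance : discoveryClient.getInstances(serviceId)) {
                    System.out.println(serviceId + " -> " + instance.getHost() + ":" + instance.getPort());
                }
            }
        }

        public static void main(String[] args) {
            SpringApplication.run(ConsulDemoApplication.class, args);
        }
    }

If the local agent itself dies, only that node loses discovery; the other nodes keep working against their own agents, so it is not a cluster-wide single point of failure.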

Related

Cassandra Kubernetes Statefulset NoHostAvailableException

I have an application deployed in Kubernetes; it consists of Cassandra, a Go client, and a Java client (and other things, but they are not relevant to this discussion).
We have used Helm to do our deployment.
We are using a StatefulSet and a headless service for Cassandra.
We have configured the clients to use the headless service DNS name as a contact point for cluster creation.
Everything works great.
Until all of the nodes go down, or some other nefarious combination of nodes goes down. I am simulating this by deleting all of the Cassandra pods in succession with kubectl delete.
When I do this, the clients throw NoHostAvailableException.
In Java it's:
"java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /10.200.23.151:9042 (com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency LOCAL_QUORUM (1 required but only 0 alive)), /10.200.152.130:9042 (com.datastax.driver.core.exceptions.UnavailableException: Not enough replicas available for query at consistency ONE (1 required but only 0 alive)))"
which eventually becomes
"java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (no host was tried)"
In Go it's:
"gocql: no hosts available in the pool"
I can query Cassandra using cqlsh, the nodes look fine in nodetool status, and all of the new IPs are there.
The image I am using doesn't have netstat, so I have not yet confirmed it's listening on the expected port.
By executing bash on the two client pods I can see the DNS makes sense using nslookup, but...
netstat does not show any established connections to Cassandra (they are present before I take the nodes down).
If I restart my clients everything works fine.
I have googled a lot (I mean a lot); most of what I have found relates to never having had a working connection at all, and the most relevant results are quite old (2014, 2016).
A node going down is a very basic event, and I would expect everything to keep working: the Cassandra cluster manages itself, discovers new nodes as they come online, balances the load, and so on.
If I take all of my Cassandra nodes down slowly, one at a time, everything works fine (I have not confirmed that the load is distributed appropriately to the correct nodes, but at least it works).
So, is there a point where this behaviour is expected? I.e. if I take everything down and nothing new is up and running before the last node of the original cluster is removed, is this behaviour expected?
To me it seems like this should be an easy issue to resolve, but I'm not sure what's missing or incorrect. I am surprised that both clients show the same symptoms, which makes me think something is wrong with our StatefulSet and service.
I think the problem might lie in the headless DNS service. If all of the nodes go down completely and there are no nodes at all available via the service until pods are replaced, it could cause the driver to hang.
I've noted that you've used Helm for your deployments, but you may be interested in the document from the authors of cass-operator on connecting to Cassandra clusters in Kubernetes.
I'm going to contact some of the authors and get them to respond here. Cheers!
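Not a definitive fix, but for context, here is a minimal sketch of how a 3.x Java driver client is typically pointed at the headless service (the service name and keyspace below are placeholders). Note that the contact point is only resolved while the Cluster is being built; afterwards the driver tracks nodes by the IPs it discovered, which is consistent with the behaviour you see when every pod is replaced at once and the client only recovers after a restart:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.policies.ConstantReconnectionPolicy;

    public class CassandraClient {
        public static void main(String[] args) {
            // Placeholder headless-service DNS name and keyspace.
            Cluster cluster = Cluster.builder()
                    .addContactPoint("cassandra.default.svc.cluster.local")
                    .withPort(9042)
                    // Keep retrying downed hosts every 5 seconds instead of backing off.
                    .withReconnectionPolicy(new ConstantReconnectionPolicy(5000))
                    .build();

            // The contact point is resolved here, once; from then on the driver
            // reconnects to nodes by the IPs it learned, not by re-querying DNS.
            Session session = cluster.connect("my_keyspace");
            System.out.println("Connected to cluster: " + cluster.getClusterName());

            session.close();
            cluster.close();
        }
    }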

Handle upgrades with spring boot admin

I am using Spring Boot Admin (SBA) to monitor our microservices within AWS ECS clusters.
Everything looks OK except upgrades: when we spin up a new version of a service, we shut down the old one once the new one becomes healthy. The problem is that the old instance is shown as down and keeps issuing notifications until we manually remove it.
Any solution?
I tried to use the instance de-registration setting, but it doesn't work well since ECS probably just kills the tasks rather than gracefully shutting down the context.
You can issue a DELETE request to /api/applications/<id> in your deployment scripts to remove the application from the admin server.
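For example, a minimal sketch of that call using the JDK's HttpClient (the admin URL and the instance id are placeholders your deployment pipeline would supply):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DeregisterOldInstance {
        public static void main(String[] args) throws Exception {
            String adminUrl = "http://sba.example.internal:8080"; // placeholder admin server URL
            String applicationId = args[0];                       // id of the instance being retired

            // DELETE /api/applications/<id> removes the stale instance from the admin server.
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(adminUrl + "/api/applications/" + applicationId))
                    .DELETE()
                    .build();

            HttpResponse<Void> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.discarding());
            System.out.println("Admin server responded with HTTP " + response.statusCode());
        }
    }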

How to make the Eureka server strong?

I am new to Spring Cloud. Currently, I want to build a new microservice based on Spring Cloud. It is very easy to build a new Eureka server, but my question is: how do I make it highly available? For example, I create two Eureka servers and a load balancer, so that when one Eureka server is down, the system still works. But I don't know how to keep the registration information consistent between the two Eureka servers.
I have already asked something similar in the spring cloud gitter channel.
Because of the CAP theorem, a distributed service discovery system has to decide whether to provide availability or consistency, trading one off against the other.
In short, quoting Spencer Gibb:
Eureka favors availability over consistency
So it is highly available, while registered services may no longer be current.
As Spencer suggested, if consistency is something you need more than availability, try Consul together with Spring Cloud Consul instead.
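Coming back to the original question of keeping the registered information consistent between the two Eureka servers: the usual high-availability setup is two (or more) peer-aware Eureka servers that register with and replicate to each other. A minimal sketch, with hypothetical host names:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

    // One of two peer-aware Eureka servers. Each server points its client at the
    // other peer, e.g. (hypothetical hosts peer1/peer2):
    //   on peer1: eureka.client.serviceUrl.defaultZone=http://peer2:8761/eureka/
    //   on peer2: eureka.client.serviceUrl.defaultZone=http://peer1:8761/eureka/
    // Client applications list both peers in defaultZone so they can fail over.
    @SpringBootApplication
    @EnableEurekaServer
    public class EurekaPeerApplication {
        public static void main(String[] args) {
            SpringApplication.run(EurekaPeerApplication.class, args);
        }
    }

Because replication between peers is asynchronous, the two registries can briefly disagree, which is exactly the availability-over-consistency trade-off quoted above.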

Adding standalone zookeeper servers into fabric ensemble

We're attempting to add standalone ZooKeeper servers into the ZooKeeper ensemble run by Fuse Fabric (as either followers or observers). However, it looks like Fabric keeps pretty tight control over the ZooKeeper configuration, and I haven't been able to find any documentation on adding hardcoded server configuration parameters to the dynamic ones Fabric uses. Has anyone else tried this, or have any idea where to look?

Questions Concerning Using Celery with Multiple Load-Balanced Django Application Servers

I'm interested in using Celery for an app I'm working on. It all seems pretty straightforward, but I'm a little confused about what I need to do if I have multiple load-balanced application servers. All of the documentation assumes that the broker will be on the same server as the application. Currently, all of my application servers sit behind an Amazon ELB, and tasks need to be able to come from any one of them.
This is what I assume I need to do:
Run a broker server on a separate instance
Configure each application instance to connect to that broker server
Each application instance will also be a Celery worker (running celeryd)?
My only beef with that is: what happens if my broker instance dies? Can I run two broker instances somehow so I'm safe if one goes under?
Any tips or information on what to do in a setup like mine would be greatly appreciated. I'm sure I'm missing something or not understanding something.
For future reference, for those who do prefer to stick with RabbitMQ...
You can create a RabbitMQ cluster from 2 or more instances. Add those instances to your ELB and point your celeryd workers at the ELB. Just make sure you connect the right ports and you should be all set. Don't forget to allow your RabbitMQ machines to talk among themselves to run the cluster. This works very well for me in production.
One exception here: if you need to schedule tasks, you need a celerybeat process. For some reason, I wasn't able to connect the celerybeat to the ELB and had to connect it to one of the instances directly. I opened an issue about it and it is supposed to be resolved (didn't test it yet). Keep in mind that celerybeat by itself can only exist once, so that's already a single point of failure.
You are correct on all points.
To make the broker reliable, set up a clustered RabbitMQ installation, as described here:
http://www.rabbitmq.com/clustering.html
Celery beat also doesn't have to be a single point of failure if you run it on every worker node with:
https://github.com/ybrs/single-beat