How do config tools like Consul "push" config updates to clients? - apache-zookeeper

There is an emerging trend of ripping global state out of traditional "static" config management tools like Chef/Puppet/Ansible, and instead storing configurations in some centralized/distributed tool, of which the main players appear to be:
ZooKeeper (Apache)
Consul (Hashicorp)
Eureka (Netflix)
Each of these tools works differently, but the principle is the same:
Store your env vars and other dynamic configurations (that is, stuff that is subject to change) in these tools as key/value pairs
Connect to these tools/services via clients at startup and pull down your config KV pairs. This typically requires the client to supply a service name ("MY_APP"), and an environment ("DEV", "PROD", etc.).
There is an excellent Consul Java client which explains all of this beautifully and provides ample code examples.
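For illustration, a minimal startup pull could look like this (going straight at Consul's HTTP KV API rather than any particular client library; the config/MY_APP/DEV/... key layout is just an assumed convention, not something Consul prescribes):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulStartupPull {
    public static void main(String[] args) throws Exception {
        // Assumed key convention: config/<service>/<environment>/<property>
        String key = "config/MY_APP/DEV/database.url";
        HttpClient http = HttpClient.newHttpClient();
        // ?raw returns the stored value directly instead of the JSON + Base64 envelope
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://localhost:8500/v1/kv/" + key + "?raw")).build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println(key + " = " + resp.body());
    }
}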
My understanding of these tools is that they are built on top of consensus algorithms such as Zab, Paxos and Gossip that allow config updates to spread almost virally, with eventual consistency, throughout your nodes. So the idea there is that if you have a myapp app that has 20 nodes, say myapp01 through myapp20, if you make a config change to one of them, that change will naturally "spread" throughout the 20 nodes over a period of seconds/minutes.
My problem is: how do these updates actually get delivered to each node? In none of the client APIs (the one I linked to above, the ZooKeeper API, or the Eureka API) do I see any kind of callback functionality that can be set up and used to notify the client when the centralized service (e.g. the Consul cluster) wants to push and reload config updates.
So I ask: how is this supposed to work (dynamic config deployment and reload on clients)? I'm interested in any viable answer for any of those 3 tools, though Consul's API seems to be the most advanced IMHO.

You could use cfg4j for that. It's a Java configuration library for distributed services. It supports Consul as one of the configuration sources.
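A minimal sketch of what that could look like, based on cfg4j's builder-style API (class and method names here are from memory of the cfg4j README, so verify them against the current docs; host, port and the property name are placeholders):
import java.util.concurrent.TimeUnit;

import org.cfg4j.provider.ConfigurationProvider;
import org.cfg4j.provider.ConfigurationProviderBuilder;
import org.cfg4j.source.ConfigurationSource;
import org.cfg4j.source.consul.ConsulConfigurationSourceBuilder;
import org.cfg4j.source.reload.strategy.PeriodicalReloadStrategy;

public class Cfg4jConsulExample {
    public static void main(String[] args) {
        // Read configuration from a local Consul agent and re-read it every 5 seconds
        ConfigurationSource source = new ConsulConfigurationSourceBuilder()
                .withHost("localhost")
                .withPort(8500)
                .build();
        ConfigurationProvider provider = new ConfigurationProviderBuilder()
                .withConfigurationSource(source)
                .withReloadStrategy(new PeriodicalReloadStrategy(5, TimeUnit.SECONDS))
                .build();
        // Placeholder property name; the value is re-resolved as the source reloads
        System.out.println(provider.getProperty("my.app.timeout", Integer.class));
    }
}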

That's a nice question. I can explain how the Consul HTTP client works.
I also initially assumed it used a push mechanism, but while exploring Consul recently I found that Consul clients actually poll the server for the keys they want to watch. It is a slightly different polling mechanism, though: Consul supports blocking queries. These are HTTP requests with a maximum timeout of 10 minutes; such a query waits until there is some change on the watched key/folder and then returns with the latest index. If the index has changed, the client reloads the configuration. For more info: Consul Blocking Query
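To make the blocking-query mechanism concrete, here is a minimal long-polling sketch against Consul's HTTP API (same assumed config/MY_APP/DEV key prefix as above; a real client would parse the returned JSON and apply the new values instead of just printing them):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulWatch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        long index = 0;
        while (true) {
            // index=<last seen index>&wait=10m makes this a blocking query: Consul holds the
            // request open until something under the prefix changes or the wait time elapses.
            HttpRequest req = HttpRequest.newBuilder(URI.create(
                    "http://localhost:8500/v1/kv/config/MY_APP/DEV?recurse&index=" + index + "&wait=10m"))
                    .build();
            HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
            long newIndex = Long.parseLong(
                    resp.headers().firstValue("X-Consul-Index").orElse("0"));
            if (newIndex != index) {
                // The index changed, so something under the watched prefix changed: reload.
                index = newIndex;
                System.out.println("Config changed, new payload: " + resp.body());
            }
        }
    }
}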

Related

Programmatically create Artemis cluster on remote server

Is it possible to programmatically create/update a cluster on a remote Artemis server?
I will have lots of docker instances and would rather configure on the fly than have to set in XML files if possible.
Ideally on app launch I'd like to check if a cluster has been set up and if not create one.
This would probably involve getting the current server configuration and updating it with the cluster details.
I see it's possible to create a Configuration.
However, I'm not sure how to get the remote server configuration, if it's at all possible.
import org.apache.activemq.artemis.core.config.ClusterConnectionConfiguration;
import org.apache.activemq.artemis.core.config.Configuration;
import org.apache.activemq.artemis.core.config.impl.ConfigurationImpl;

// Building a cluster configuration locally is straightforward...
Configuration config = new ConfigurationImpl();
ClusterConnectionConfiguration ccc = new ClusterConnectionConfiguration();
ccc.setAddress("231.7.7.7");
config.addClusterConfiguration(ccc);
// ...but I still need a way to get and update the current configuration of the remote server
ActiveMQServer.getConfiguration();
Any advice would be appreciated.
If it is possible, is this a good approach to take to configure on the fly?
Thanks
The org.apache.activemq.artemis.core.config.impl.ConfigurationImpl object can be used to programmatically configure the broker. The broker test-suite uses this object to configure broker instances. However, this object is not available in any remote sense.
Once the broker is started there is a rich management API you can use to add things like security settings, address settings, diverts, bridges, addresses, queues, etc. However, the changes made by most (although not all) of these operations are volatile which means many of them would need to be performed every time the broker started. Furthermore, there are no management methods to add cluster connections.
You might consider using a tool like Ansible to manage the configuration or even roll your own solution with a templating engine like FreeMarker to customize the XML and then distribute it to your Docker instances using some other technology.
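If you go the templating route, a minimal FreeMarker sketch could look like this (the broker.xml.ftl template, the templates directory and the ${clusterAddress} placeholder are assumptions for illustration); the rendered broker.xml would then be distributed to your Docker instances:
import java.io.File;
import java.io.FileWriter;
import java.io.Writer;
import java.util.Map;

import freemarker.template.Configuration;
import freemarker.template.Template;

public class BrokerXmlGenerator {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);
        cfg.setDirectoryForTemplateLoading(new File("templates"));
        // Hypothetical template containing the <cluster-connections> section with placeholders
        Template template = cfg.getTemplate("broker.xml.ftl");
        Map<String, Object> model = Map.of("clusterAddress", "231.7.7.7");
        try (Writer out = new FileWriter("broker.xml")) {
            template.process(model, out);
        }
    }
}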

How to sync client and backend when they are in different repositories with different pipelines

This isn't an actual problem I currently have, but I would like to know the different approaches people take to solve a very common scenario.
You have one or many microservices, and each of them has schemas and an interface that clients use to consume resources.
We have a website in a different repo that is consuming data from one of those microservices, let's say over a REST API.
Something like
Microservice (API): I change the interface, meaning that the JSON response is different.
Frontend: I make changes in the frontend to adapt to the new response from the microservice.
If we deploy the microservice before deploying the frontend, we break the frontend site.
So you need to make sure the new frontend version has been deployed before deploying the microservice.
That is the manual approach, but how are people tracking this in an automated way, e.g. making it impossible to deploy without the correct version of the frontend already being deployed?
One of the safest approaches is to always stay backward compatible by versioning at the service level, which means running different versions of the same service whenever you need to introduce a backward-incompatible change.
Let's assume you have a microservice that serves products at a REST endpoint like this:
/api/v1/products
When you make your backward-incompatible change, you introduce the new version while keeping the existing one working (see the sketch at the end of this answer):
/api/v1/products
/api/v2/products
You should set a sunset date for your first service endpoint and communicate it to your clients. In your case the client is the frontend, but in other situations there could be many other clients out there (different frontend services, different backend services, etc.).
The drawback of this approach is that you may need to support several versions of the same service, which can be tricky, but it is inevitable. Communicating the sunset to clients can also be tricky in many situations.
On the other hand, it gives you the true power of microservice isolation and freedom.
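As a concrete illustration of the versioned endpoints above, assuming a Spring MVC service (the question doesn't specify a framework), the two versions can simply live side by side; ProductV1 and ProductV2 are made-up response shapes:
import java.util.List;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ProductController {

    // v1 keeps the old response shape until its announced sunset date
    @GetMapping("/api/v1/products")
    public List<ProductV1> productsV1() {
        return List.of(new ProductV1("p-1", "Widget"));
    }

    // v2 serves the new, backward-incompatible response shape
    @GetMapping("/api/v2/products")
    public List<ProductV2> productsV2() {
        return List.of(new ProductV2("p-1", "Widget", "9.90"));
    }

    record ProductV1(String id, String name) {}
    record ProductV2(String id, String name, String price) {}
}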
If you use Docker in your DevOps environment, you can use docker-compose with the depends_on property to control startup order. Alternatively, you can write a script (bash, for example) that checks that the correct version of the frontend is already deployed before continuing, and include it in your pipeline (see the sketch below).
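A rough sketch of such a version gate, written here in Java rather than bash (the https://frontend.example.com/version endpoint and URL are made up for illustration; the point is simply that a non-zero exit fails the pipeline step):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FrontendVersionGate {
    public static void main(String[] args) throws Exception {
        String requiredVersion = args[0];                            // passed in by the pipeline, e.g. "2.4.0"
        String versionUrl = "https://frontend.example.com/version";  // hypothetical version endpoint
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(versionUrl)).build();
        String deployed = http.send(req, HttpResponse.BodyHandlers.ofString()).body().trim();
        if (!deployed.equals(requiredVersion)) {
            System.err.println("Frontend is at " + deployed + " but " + requiredVersion + " is required; aborting.");
            System.exit(1);  // non-zero exit code fails the deployment step
        }
    }
}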

Application Performance monitoring on Swisscom Application Cloud

I am investigating options for monitoring our installation in Swisscom's cloud-foundry. My objectives are the following:
monitor performance indicators for deployed application (such as cpu, disk, memory)
monitor performance indicators for services (slow queries, number of queries, ideally also some metrics on hitting quotas)
So far, I understand the options are the following (including some BUTs):
I used a very nice TOP cf-plugin (github)
This works very well. It seems that it registers itself to get the required firehose nozzles and consume data.
That is very useful for tracing / ad-hoc monitoring, but not very good for serious infrastructure monitoring.
Another way I found is the firehose-syslog solution.
This can be deployed as an app and (as far as I understand) does the job in a similar way to the TOP cf plugin.
The problem is that it requires a registered client so it can authenticate with the Doppler endpoint. For some reason, the top-cf-plugin does that automatically / in another way.
The last option I am considering is to build the monitoring into the app itself (using a special buildpack).
That can be done, for example, with Datadog, but it also seems to require a dedicated UAA client to register the nozzle.
I would like to check whether somebody is (or was) on a similar road and has some findings.
Ultimately, I would like to raise the following questions to the Swisscom community support:
Is it possible to register a UAA client so that an external service can ingest events through the firehose nozzle? (This requires admin credentials, if I read correctly.)
Is there an alternative way to authenticate with the nozzle (for example using a special user and their authentication token)?
Is there any alternative way to monitor CF deployments in Swisscom? Also, is there a paper, blog post or other form of documentation that would be helpful in this respect (also for other users of AppCloud)?
Since it requires admin permissions, we can not give out UAA clients for the firehose.
However, there are different ways to get metrics in context of a user.
CF API
You can obtain basic metrics of a specific app by polling the CF API:
https://apidocs.cloudfoundry.org/5.0.0/apps/get_detailed_stats_for_a_started_app.html
However, since you have to poll (and for each app), it's not the recommended way.
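A rough sketch of such a poll (the API host is a placeholder for your provider's CF API endpoint; the app GUID can come from cf app <name> --guid and the token from cf oauth-token):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AppStatsPoller {
    public static void main(String[] args) throws Exception {
        String appGuid = args[0];   // e.g. from `cf app <name> --guid`
        String token = args[1];     // e.g. the output of `cf oauth-token` (already includes "bearer ")
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("https://api.example.com/v2/apps/" + appGuid + "/stats"))
                .header("Authorization", token)
                .build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        // The response is JSON with per-instance cpu, memory and disk usage
        System.out.println(resp.body());
    }
}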
Metrics in syslog drain
CF allows devs to forward their logs to syslog drains; in more recent versions, CF also sends metrics to this syslog drain (see https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html#container-metrics).
For example, you could use Swisscom's Elasticsearch service to store these metrics and then analyze them using Kibana.
Metrics using loggregator (firehose)
The firehose allows streaming logs to clients for two types of roles:
Streaming all logs to admins (which requires a UAA client with admin permissions) and streaming app logs and metrics to devs with permissions in the app's space. This is also what the cf logs command uses. cf top also works this way (it enumerates all apps and streams the logs of each app).
However, you will find out that most open source tools that leverage the firehose only work in admin mode, since they're written for the platform operator.
Of course you also have the possibility to monitor your app by instrumenting it (white-box approach), for example by configuring Spring Actuator in a Spring Boot app or by including an agent from your favourite APM vendor (Dynatrace, AppDynamics, ...).
I guess this is the most common approach; we've seen a lot of teams have success by instrumenting their applications, especially since advanced monitoring requires you to create your own metrics anyway, as the firehose-provided CPU/memory metrics are not that powerful in a microservice world.
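As a minimal sketch of that white-box approach, assuming a Spring Boot app with Actuator and Micrometer on the classpath (the metric name and component are made up for illustration):
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;

import org.springframework.stereotype.Component;

@Component
public class OrderMetrics {

    private final Counter ordersPlaced;

    public OrderMetrics(MeterRegistry registry) {
        // Meters registered here are exposed by Actuator, e.g. under /actuator/metrics
        this.ordersPlaced = Counter.builder("shop.orders.placed")
                .description("Number of orders placed")
                .register(registry);
    }

    public void recordOrder() {
        ordersPlaced.increment();
    }
}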
However, option 2 would be worth a try as well, especially since the ELK stack's metric support is getting better and better.

How do micro services in Cloud Foundry communicate?

I'm a newbie in Cloud Foundry. In following the reference application provided by Predix (https://www.predix.io/resources/tutorials/tutorial-details.html?tutorial_id=1473&tag=1610&journey=Connect%20devices%20using%20the%20Reference%20App&resources=1592,1473,1600), the application consisted of several modules and each module is implemented as micro service.
My question is, how do these micro services talk to each other? I understand they must be using some sort of REST calls but the problem is:
Service registry: say I have services A, B, C. How do these components 'discover' the REST URLs of the other components, given that a component's URL is only known after the service is pushed to Cloud Foundry?
How does Cloud Foundry control the dependencies between components during service startup and shutdown? Say A cannot start until B is started, and B needs to be shut down if A is shut down.
The ref-app 'application' consists of several 'apps' and Predix 'services'. An app is bound to the service via an entry in the manifest.yml. Thus, it gets the service endpoint and other important configuration information via this binding. When an app is bound to a service, the 'cf env ' command returns the needed info.
There might still be some Service endpoint info in a property file, but that's something that will be refactored out over time.
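For illustration, this is roughly how an app can read the endpoint of a bound service from the environment at startup (a sketch using Jackson; the service label "my-timeseries-service" and the "uri" credential field are made up, and what actually appears depends on the binding in your manifest.yml):
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class BoundServiceLookup {
    public static void main(String[] args) throws Exception {
        // Cloud Foundry injects the credentials of bound services into the container as JSON
        String vcapServices = System.getenv("VCAP_SERVICES");
        JsonNode root = new ObjectMapper().readTree(vcapServices);
        // Top-level keys are service labels; each label holds an array of bound instances
        JsonNode credentials = root.path("my-timeseries-service").get(0).path("credentials");
        String endpoint = credentials.path("uri").asText();
        System.out.println("Service endpoint: " + endpoint);
    }
}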
The individual apps of the ref-app application are put in separate microservices, since they get used as components of other applications. Hence, the microservices approach. If there were startup dependencies across apps, the CI/CD pipeline that pushes the apps to the cloud would need to manage these dependencies. The dependencies in ref-app are simply the obvious ones, read-on.
While it's true that coupling of microservices is not part of the design, there are some obvious reasons it might happen: language and function. If you have a "back-end" microservice written in Java used by a "front-end" UI microservice written in JavaScript on NodeJS, then these are pushed as two separate apps. Theoretically the UI won't work too well without the back-end, but there is a plan to actually make that happen with some canned JSON. Still, there is some logical coupling there.
The nice things you get from microservices is that they might need to scale differently and cloud foundry makes that quite easy with the 'cf scale' command. They might be used by multiple other microservices, hence creating new scale requirements. So, thinking about what needs to scale and also the release cycle of the functionality helps in deciding what comprises a microservice.
As for ordering, for example, the Google Maps api might be required by your application so it could be said that it should be launched first and your application second. But in reality, your application should take in to account that the maps api might be down. Your goal should be that your app behaves well when a dependent microservice is not available.
The 'apps' of the 'application' know about each other via their names and the URLs that the cloud gives them. There are actually many copies of the reference app running in various clouds and spaces. They are prefaced with things like Dev or QA or Integration, etc. Could we get the Dev front end talking to the QA back-end microservice? Sure, it's just a URL.
In addition to the aforementioned etcd (which I haven't tried yet), you can also create a CUPS (user-provided) service 'definition'. This is also a set of key/value pairs, which you can tie to the space (dev/qa/stage/prod) and bind via the manifest. This way you get the props from the environment.
If microservices do need to talk to each other, it's generally via REST, as you have noticed. However, microservice purists may be against such dependencies. That apart, service discovery is enabled by publishing available endpoints to a service registry (etcd, in the case of Cloud Foundry). Individual instances of a given service register themselves with the registry using a POST operation; the client then only needs to know about the published endpoint, not each individual service instance's endpoint. This is self-registration. The client will either communicate with a load balancer such as an ELB, which looks up the service registry, or must itself be aware of the service registry.
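A rough sketch of such a self-registration call against etcd's v2 key-value HTTP API (the service name, instance endpoint and registry URL are made up; note that setting a key in etcd v2 is actually a PUT, and the TTL means the entry disappears unless the instance keeps refreshing it):
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SelfRegistration {
    public static void main(String[] args) throws Exception {
        // Register this instance's host:port under a well-known key with a 60 second TTL
        String registryUrl = "http://etcd.example.com:2379/v2/keys/services/orders/instance-1";
        String body = "value=10.0.0.12:8080&ttl=60";
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(URI.create(registryUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        System.out.println("Registry response: " + resp.body());
    }
}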
For (2), per the definition of a microservice there should not be such a hard dependency between microservices; designing such a coupled set of services indicates imminent issues with orchestration and synchronization. If such dependencies do emerge, you will have to rely on service registries, health checks and circuit breakers for fallback.

How to deploy Node.js in cloud for high availability using multi-core, reverse-proxy, and SSL

I have posted this to ServerFault, but the Node.js community seems tiny there, so I'm hoping this bring more exposure.
I have a Node.js (0.4.9) application and am researching how to best deploy and maintain it. I want to run it in the cloud (EC2 or RackSpace) with high availability. The app should run on HTTPS. I'll worry about East/West/EU full-failover later.
I have done a lot of reading about keep-alive (Upstart, Forever), multi-core utilities (Fugue, multi-node, Cluster), and proxy/load balancers (node-http-proxy, nginx, Varnish, and Pound). However, I am unsure how to combine the various utilities available to me.
I have this setup in mind and need to iron out some questions and get feedback.
Cluster is the most actively developed and seemingly popular multi-core utility for Node.js, so use that to run 1 node "cluster" per app server on non-privileged port (say 3000). Q1: Should Forever be used to keep the cluster alive or is that just redundant?
Use 1 nginx per app server running on port 80, simply reverse proxying to node on port 3000. Q2: Would node-http-proxy be more suitable for this task even though it doesn't gzip or serve static files quickly?
Have a minimum of 2x servers as described above, with an independent server acting as a load balancer across these boxes. Use Pound listening on 443 to terminate HTTPS and pass HTTP to Varnish, which would round-robin load balance across the IPs of the servers above. Q3: Should nginx be used to do both instead? Q4: Should the AWS or RackSpace load balancer be considered instead (the latter doesn't terminate HTTPS)?
General Questions:
Do you see a need for (2) above at all?
Where is the best place to terminate HTTPS?
If WebSockets are needed in the future, what nginx substitutions would you make?
I'd really like to hear how people are setting up current production environments and which combination of tools they prefer. Much appreciated.
It's been several months since I asked this question and there hasn't been a lot of answer flow. Both Samyak Bhuta and nponeccop had good suggestions, but I wanted to discuss the answers I've found to my questions.
Here is what I've settled on at this point for a production system, but further improvements are always being made. I hope it helps anyone in a similar scenario.
Use Cluster to spawn as many child processes as you desire to handle incoming requests on multi-core virtual or physical machines. This binds to a single port and makes maintenance easier. My rule of thumb is n - 1 Cluster workers. You don't need Forever on this, as Cluster respawns worker processes that die. To have resiliency even at the Cluster parent level, ensure that you use an Upstart script (or equivalent) to daemonize the Node.js application, and use Monit (or equivalent) to watch the PID of the Cluster parent and respawn it if it dies. You can try using the respawn feature of Upstart, but I prefer having Monit watching things, so rather than split responsibilities, I find it's best to let Monit handle the respawn as well.
Use 1 nginx per app server running on port 80, simply reverse proxying to your Cluster on whatever port you bound to in (1). node-http-proxy can be used, but nginx is more mature, more featureful, and faster at serving static files. Run nginx lean (don't log, don't gzip tiny files) to minimize its overhead.
Have minimum 2x servers as described above in a minimum of 2 availability zones, and if in AWS, use an ELB that terminates HTTPS/SSL on port 443 and communicates on HTTP port 80 to the node.js app servers. ELBs are simple and, if you desire, make it somewhat easier to auto-scale. You could run multiple nginx either sharing an IP or round-robin balanced themselves by your DNS provider, but I found this overkill for now. At that point, you'd remove the nginx instance on each app server.
I have not needed WebSockets so nginx continues to be suitable and I'll revisit this issue when WebSockets come into the picture.
Feedback is welcome.
You should not bother about serving static files quickly. If your load is small, node static file servers will do. If your load is big, it's better to use a CDN (Akamai, Limelight, CoralCDN).
Instead of forever you can use monit.
Instead of nginx you can use HAProxy. It is known to work well with websockets. Consider also proxying flash sockets as they are a good workaround until websocket support is ubiquitous (see socket.io).
HAProxy has some support for HTTPS load balancing, but not termination. You can try to use stunnel for HTTPS termination, but I think it's too slow.
Round-robin load (or other statistical) balancing works pretty well in practice, so there's no need to know about other servers' load in most cases.
Consider also using ZeroMQ or RabbitMQ for communications between nodes.
This is an excellent thread! Thanks to everyone that contributed useful information.
I've been dealing with the same issues the past few months setting up the infrastructure for our startup.
As people mentioned previously, we wanted a Node environment with multi-core support + web sockets + vhosts
We ended up creating a hybrid between the native cluster module and http-proxy and called it Drone - of course it's open sourced:
https://github.com/makesites/drone
We also released it as an AMI with Monit and Nginx
https://aws.amazon.com/amis/drone-server
I found this thread while researching how to add SSL support to Drone - thanks for recommending ELB, but I wouldn't rely on a proprietary solution for something so crucial.
Instead I extended the default proxy to handle all the SSL requests. The configuration is minimal while the SSL requests are converted to plain http - but I guess that's preferable when you're passing traffic between ports...
Feel free to look into it and let me know if it fits your needs. All feedback welcomed.
I have seen an AWS load balancer for load balancing and termination, plus http-node-proxy for reverse proxying (if you want to run multiple services per box), plus cluster.js for multi-core support and process-level failover, doing extremely well.
forever.js on top of cluster.js could be a good option if you want to take extreme care in terms of failover, but that's hardly needed.