Erlang: How do you reload an application env configuration? - version-control

How do you reload an application's configuration? Or, what are good strategies for managing dynamic application configuration?
For example, let's say I had log levels and I wanted to change them at runtime. Also, let's assume this is one of many such options. Does it make sense to have a "configuration server" that holds configuration state for other parts of the application to query? Do people do that or did I just make it up?

I believe it's reasonable to keep all your configuration data in a repository (Subversion, Mercurial, etc.) and have applications download it every time they start or attempt to reload some of their configuration options. This is a centralized approach (though you can run several configuration servers to avoid a single point of failure), and it:
allows you to keep track of changes, so you know who made a change and when (nobody wants to be held responsible for an improper configuration);
lets you use the same configuration for all applications throughout your network;
makes changes easy: you simply modify the configuration and notify the concerned applications, for example with a gen_server:abcast call or by other means.
The proplists(3) module is useful when reading configuration.
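As a minimal sketch of such a "configuration server" (the module name, option keys, and message shapes below are made up for illustration), a gen_server can hold the options as a proplist, answer lookups from the rest of the application, and accept updates pushed from a central place via gen_server:abcast:

-module(conf_server).
-behaviour(gen_server).

-export([start_link/1, get/2, set/2]).
-export([init/1, handle_call/3, handle_cast/2]).

%% Start with an initial proplist, e.g. [{log_level, info}, {port, 8080}].
start_link(InitialOptions) ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, InitialOptions, []).

%% Other parts of the application query options through this.
get(Key, Default) ->
    gen_server:call(?MODULE, {get, Key, Default}).

%% Change an option on the local node only.
set(Key, Value) ->
    gen_server:cast(?MODULE, {set, Key, Value}).

init(Options) ->
    {ok, Options}.

handle_call({get, Key, Default}, _From, Options) ->
    {reply, proplists:get_value(Key, Options, Default), Options}.

%% Updates arrive as casts, so a central node can push them to every node
%% with gen_server:abcast(Nodes, conf_server, {set, Key, Value}).
handle_cast({set, Key, Value}, Options) ->
    {noreply, [{Key, Value} | proplists:delete(Key, Options)]}.

Changing the log level at runtime then amounts to broadcasting gen_server:abcast(Nodes, conf_server, {set, log_level, debug}) from wherever the central configuration lives; interested processes simply re-read the value the next time they ask.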

If my understanding is correct, the problem is the following:
You want to create a distributed, scalable system, and of course Erlang is the first choice that comes to mind, since it was designed for exactly such purposes.
You will have several nodes running both local and distributed applications.
Here the simplest hierarchy is to have a hot-standby backup for every major piece of functionality.
This can be achieved by implementing a distributed application controller (a sketch follows below).
The simplest example is to start a server on one node while a slave server is started simultaneously on a mate node.
Distributed application controllers have many advantages.
A simple example is handling node_up messages differently by introducing new messages that indicate a node is not merely Erlang-VM-ready but has all vital applications running. This way the mate node can be sure that the stand-by node is ready and can start syncing.
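For reference, OTP's kernel already ships a distributed application controller that implements exactly this failover/takeover behaviour; a minimal sys.config sketch (node names, host names, and the application name are placeholders) could look like this:

%% sys.config: my_app normally runs on node_a@host1; if that node dies, the
%% application is restarted (failover) on node_b@host2 or node_c@host2 after
%% a 5000 ms timeout, and moved back (takeover) when node_a returns.
[{kernel,
  [{distributed, [{my_app, 5000, ['node_a@host1', {'node_b@host2', 'node_c@host2'}]}]},
   {sync_nodes_mandatory, ['node_b@host2', 'node_c@host2']},
   {sync_nodes_timeout, 30000}]}].

Each node gets a similar file (with sync_nodes_mandatory listing the other nodes), and the application's start/2 callback then sees {takeover, Node} (or {failover, Node}, if start phases are defined) instead of normal, which is a natural hook for the "start syncing" step mentioned above.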
Please elaborate or comment if I misunderstood something.
Good luck!

Related

Microservices communication model

Consider a microservices architecture where you need to expose functionality for managing simple configuration shared between different microservices. The configuration does not change often, but I would still like to see changes whenever I ask for any value.
Using a REST microservice seems easy, but it adds latency.
An alternative could be RPC over messaging (e.g. RabbitMQ), but the interface becomes more complicated.
What communication do you use for simple internal services, and what are the pros and cons?
Any examples?
I tried a REST API, but it means a lot of "slow" requests, which add latency to the overall request.
I've found that using RESTful APIs with some judicious use of cache-control headers actually works fairly well for this use case. The biggest challenge is ensuring that the HTTP client underneath your REST client actually respects those headers.
It's fairly easy to implement, fits nicely into HTTP, and generally scales really well. It gives the client control over whether to respect the caching suggestions, and it lets the server answer cheaply with 304 Not Modified when it "knows" the configuration hasn't changed and the client asks for a new version.
You don't have to get into anything too complicated from a cache-invalidation standpoint, and you can leverage things like edge caching to further accelerate things in interesting ways.
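As an illustration of the conditional-request part (the configuration URL below is hypothetical; the ETag/If-None-Match/304 mechanics are plain HTTP), a polling client could look something like this, sketched here in Erlang with the standard httpc client:

%% Hypothetical example: poll a config endpoint with a conditional GET.
%% Etag is the value returned by a previous fetch ("" on the first call).
-module(config_client).
-export([fetch/2]).

fetch(Url, Etag) ->
    {ok, _} = application:ensure_all_started(inets),
    Headers = case Etag of
                  "" -> [];
                  _  -> [{"if-none-match", Etag}]
              end,
    case httpc:request(get, {Url, Headers}, [], []) of
        {ok, {{_, 304, _}, _RespHeaders, _}} ->
            not_modified;                        %% keep the cached copy
        {ok, {{_, 200, _}, RespHeaders, Body}} ->
            NewEtag = proplists:get_value("etag", RespHeaders, ""),
            {ok, NewEtag, Body};                 %% cache Body keyed by NewEtag
        {error, Reason} ->
            {error, Reason}
    end.

Callers keep the last body and ETag they saw; a not_modified reply means the cached configuration is still current, so repeated polls stay cheap for both sides.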
The question to ask is ultimately the extent to which it is a requirement that a change to the configuration immediately affects everything.
If that's actually a requirement, then we're talking about strong consistency which implies some combination of:
all other processing must effectively be executed one at a time against the component where the change is made (ultimately there can only be one such component: if there are multiple, they will be affected at different times);
all other processing must stop for the duration of time that it takes to propagate the change to all components.
(These can be combined: you can have multiple instances depend on the configuration, stop them for as long as it takes to update them all, and then execute things in parallel again. An example of this is making the configuration static in the dependent services and taking them all down to update it: if such updates are sufficiently rare, you can fit them into your error/downtime budget.)
Needless to say, there's a (likely surprisingly small) consistency budget you're dealing with.
If you don't actually need strong, absolute consistency as described above (and the set of problems that actually need it is perhaps surprisingly small: anything to do with money, for instance, doesn't actually need strong consistency, because it's only money), then it's a question of how much inconsistency is acceptable. Typically you'll quantify this with some sort of bounded staleness plus a liveness guarantee that you don't go back in time (unless there's a really good reason to go back in time). At this point we've established that you want eventual consistency; we're just haggling over "how eventual?".
For this, propagating the configuration changes via a durable publish-subscribe log (Kafka being the exemplar of this approach) is probably the place to start. Components subscribe to this log and update local state as it changes (and probably store the log position and the last value in some local store to prevent inadvertently going backward in time when they initially read the log). You can then distribute the configuration so that it lives in the local memory of the subscribers, though during an update there will be a window in which different subscribers have different views of that configuration.
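The pattern is language-agnostic; as a rough sketch (in Erlang, to stay consistent with the rest of this page, and with a made-up {config_event, Offset, Key, Value} message standing in for whatever a real Kafka consumer would deliver), a subscriber that never moves backward in time might look like:

%% Protocol-agnostic sketch of a config subscriber. The last seen offset and
%% value for each key are kept in a dets file so a restart or replay never
%% applies an older record on top of a newer one.
-module(config_follower).
-behaviour(gen_server).

-export([start_link/0, get/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

get(Key) ->
    gen_server:call(?MODULE, {get, Key}).

init([]) ->
    {ok, Tab} = dets:open_file(config_follower, [{file, "config_follower.dets"}]),
    {ok, Tab}.

handle_call({get, Key}, _From, Tab) ->
    Reply = case dets:lookup(Tab, Key) of
                [{Key, _Offset, Value}] -> {ok, Value};
                []                      -> not_found
            end,
    {reply, Reply, Tab}.

handle_cast(_Msg, Tab) ->
    {noreply, Tab}.

%% The log consumer sends one message per record it reads.
handle_info({config_event, Offset, Key, Value}, Tab) ->
    case dets:lookup(Tab, Key) of
        [{Key, Seen, _}] when Seen >= Offset ->
            {noreply, Tab};                      %% stale record, ignore
        _ ->
            ok = dets:insert(Tab, {Key, Offset, Value}),
            {noreply, Tab}
    end;
handle_info(_Other, Tab) ->
    {noreply, Tab}.

On restart the subscriber can replay the log and safely discard anything at or below the offsets dets already holds, so a lagging or replayed record never overwrites a newer value.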
A lot of solutions exist for externalizing microservice configuration to a central location, depending on which frameworks/programming languages you used to build your services. If you happen to be using Spring, take a look at Spring Cloud Config. Of course, Eureka is not the only solution tailored for this purpose.

How to store shared-by-same-instances data in spring microservices architecture

The situation is the following: I am building a system that requires redundant microservices for failover or load balancing, so I am starting two (or more) instances of, for example, a simple core REST service that provides data.
My question is: how would you store the data? Using two JPA instances to access the same database (both writing and reading) will result in problems, especially with second-level caching and consistency. Since the database itself must be redundant (a requirement), it might be possible to have each service instance access its own database, but how would you synchronize them? Is there any common solution for this?
Thanks in advance!
If you truly need a multi-master consistent database, then you will almost definitely need to implement this at the database layer.
I would not cache things that are transactionally sensitive. If you truly need to, and cannot specify a reasonable TTL within which content may be stale, then you will need to set up some pub/sub mechanism to expire modified entities. A lot of this really depends on your data: how often it changes, and whether you can separate cacheable from non-cacheable data. These questions strongly influence your caching decisions.
If you don't want to re-invent master-master replication (which will be highly non-trivial), I suggest you choose a DB system that supports this out of the box.
This will not solve all your problems out of the box, but at least it solves the hard part of the problem. What you still will need to do is e.g. defining and implementing a conflict resolution strategy.
A good choice for a master-master DB system is CouchDB. It is open source and there are also service providers available, in case you don't want to host the DB by yourself. I'm sure there are other DB systems that provide master-master replication as well.
There are two completely separate layers in your case:
one for the application servers and another for the database.
If you really need a scalable system (and I think you do, since you mention load balancing), then you should move all state out of your application.
For example, you should not use second-level caching inside your application instances; instead, use an external service like Redis or Memcached.
And you should use just one master database instance for writes, with a replica waiting for failover. For that we use Amazon RDS Multi-AZ instances: there is a single master database which is replicated to another instance, and in case of a crash the second database is automatically promoted to master within a couple of seconds.

Learning Zookeeper - Help me with example

I'm trying to wrap my head around Zookeeper and what it does. To this point, my experience with Zookeeper has been through other libraries that require it (Solr and Kafka), and so my basic understanding is the very vague "you'd better use Zookeeper to keep your configuration straight".
So help me think through a simple example problem. Let's say that I build my own service that does "stuff". There are two things that I want to protect:
I want to have as little downtime as possible (gotta keep doing stuff).
I can not have more than one server doing stuff because bad things would happen.
So, how would I set this up in Zookeeper? Is Zookeeper responsible for starting another stuff server if one goes down? Or do I subscribe to a Zookeeper "stuff doer status" callback? If I erroneously start up two stuff servers, how does Zookeeper help me keep bad things from happening?
Zookeeper is a distributed lock manager. These systems provide features like coordinator election (aka "master election" or "leader election") for a distributed system, as well as provide a consistent, distributed access to small amounts of critical information which is frequently used for configuration (i.e., don't treat it like a database or a general file system).
Note that Zookeeper does not manage your service. Rather, you can use Zookeeper to keep a hot standby (or several): you run N replicas of your server so that, if the current leader goes down or becomes unavailable for any reason, one of the working instances can take over immediately.
Using master election, you can choose to have two (or more) servers, but only one of them will be able to take the master lock, so only that one will be able to take action. As soon as it goes away, it will lose its claim to the lock, and your hot standby will pick up the lock and start doing work that you need it to do. Look at Zookeeper recipes for code samples. However, properly handing off work, checkpointing, and general service resilience is still up to you to design and implement.
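To make the recipe concrete, here is a rough sketch of that election pattern in Erlang. The zk_client module and its functions are hypothetical placeholders for whatever real client library you use; the recipe itself (ephemeral sequential znodes under an election path, lowest sequence number leads, everyone else watches the candidate just below them) is the standard one from the Zookeeper recipes.

%% Hypothetical zk_client API; standard Zookeeper leader-election recipe.
-module(stuff_election).
-export([run/1]).

-define(PATH, "/stuff-election").

run(Conn) ->
    %% Ephemeral + sequential: the znode disappears if this process dies and
    %% gets an increasing suffix, e.g. "candidate-0000000042".
    {ok, MyPath} = zk_client:create(Conn, ?PATH ++ "/candidate-",
                                    [ephemeral, sequential]),
    elect(Conn, filename:basename(MyPath)).

elect(Conn, Me) ->
    {ok, Children} = zk_client:get_children(Conn, ?PATH),
    case lists:min(Children) of
        Me ->
            %% Lowest sequence number wins: only this instance "does stuff",
            %% so two servers can never act at the same time.
            do_stuff_forever();
        _ ->
            %% Stand by: watch the candidate immediately below us and re-run
            %% the election when it goes away (its owner crashed or left).
            Below = lists:max([C || C <- Children, C < Me]),
            ok = zk_client:watch_delete(Conn, ?PATH ++ "/" ++ Below),
            receive
                {znode_deleted, _Path} -> elect(Conn, Me)
            end
    end.

do_stuff_forever() ->
    timer:sleep(infinity).   %% placeholder for the real work loop

Handing work off cleanly between the old and new leader (checkpointing, fencing off a paused-but-not-dead former leader, and so on) is still yours to design, as noted above.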
That said, Zookeeper and similar systems provide a solid foundation to enable you to build robust distributed systems.
Other systems similar to Zookeeper include (alphabetically):
Chubby
doozerd
etcd
Several of these have detailed comparisons written up on their respective websites to show how they differ from the others in the list.

How to connect meteor to an existing backend?

I recently discovered Meteor, and I really love the simplicity that it brings to programming new apps. My question is: how do you connect it to an existing back-end? We have a substantial amount of existing Clojure code, also running with MongoDB. What I would like to do is use Meteor to build the front-end of my app. I guess I could connect my Meteor app directly to the MongoDB instance of the back-end, but this does not seem like a good practice... or is it?
Another option I imagined was to access the DB from either the webapp or the Clojure code and create a separate way of communication between the two with a queue mechanism, or sockets. Any hint or pointer to relevant documentation would be helpful!
Take a look at Meteor's environment variable settings. By setting these variables you can easily define an external MongoDB instance. In particular it would be
export MONGO_URL="mongodb://yourmongodbserver/your-db"
There is a screencast from eventedmind.com on this specific topic, https://eventedmind.com/feed/sg3ejYnmhxpBNoWan, which is quite helpful.
Regarding the "how" to point them to the same, #Michael's answer is spot on; just point your Meteor web servers at the same MongoDB.
Regarding whether or not you should, that depends on your situation. Having everything run off the same DB certainly simplifies things.
Having separate DBs can potentially reduce the load on your DB tier, as you could selectively choose which writes/updates to replicate between the Clojure and Meteor DBs.
One issue with either method is the speed of notification of changes. Currently, Meteor servers poll the DB every 10 seconds to recognize changes. Happily, once the oplog branch gets merged into master, it will bring a large speed improvement in how quickly external changes made in the DB (as opposed to directly through a Meteor server) are reflected in the Meteor clients. The oplog support will enable Meteor servers to emulate a replica-set instance, tailing the oplog, which will mean practically instant notification of DB changes.
Using a queue as a middle-ware layer introduces complexity and adds another point of failure. It also increases latency of notification. These issues can be mitigated, though, and there may be other pieces of your infrastructure in the future that would benefit from such a middle-ware queue. For example, other interested systems could register with the queue to receive notification of changes without querying or needing to know about your db. You can also scale your MongoDB instances independently and tune the queue to determine what "eventually" means in the "eventually consistent" guarantee.
I think the questions to ask are:
how much overlap is there between the Clojure dataset and the Meteor dataset
how quickly do you need changes to be reflected between the two
will a middle-ware queue be useful in other circumstances as you grow
Regarding possible queue technologies to look into, I've heard very good things about RabbitMQ. The Oct. 2013 talk at the Clojure NYC meetup included a description of switching to RabbitMQ from Amazon SQS due to latency issues with SQS and anecdotally RabbitMQ has been rock-solid for them.

Scala + Akka: How to develop a Multi-Machine Highly Available Cluster

We're developing a server system in Scala + Akka for a game that will serve clients in Android, iPhone, and Second Life. There are parts of this server that need to be highly available, running on multiple machines. If one of those servers dies (of, say, hardware failure), the system needs to keep running. I think I want the clients to have a list of machines they will try to connect with, similar to how Cassandra works.
The multi-node examples I've seen so far with Akka seem to be centered on scalability rather than high availability (at least with regard to hardware). The multi-node examples always seem to have a single point of failure. For example, there are load balancers, but if I need to reboot one of the machines that run a load balancer, my system will suffer some downtime.
Are there any examples that show this type of hardware fault tolerance for Akka? Or, do you have any thoughts on good ways to make this happen?
So far, the best answer I've been able to come up with is to study the Erlang OTP docs, meditate on them, and try to figure out how to put my system together using the building blocks available in Akka.
But if there are resources, examples, or ideas on how to share state between multiple machines in a way that if one of them goes down things keep running, I'd sure appreciate them, because I'm concerned I might be re-inventing the wheel here. Maybe there is a multi-node STM container that automatically keeps the shared state in sync across multiple nodes? Or maybe this is so easy to make that the documentation doesn't bother showing examples of how to do it, or perhaps I haven't been thorough enough in my research and experimentation yet. Any thoughts or ideas will be appreciated.
HA and load management is a very important aspect of scalability and is available as a part of the AkkaSource commercial offering.
If you're listing multiple potential hosts in your clients already, then those can effectively become load balancers.
You could offer a host suggestion service that recommends to the client which machine to connect to (based on current load, or whatever); the client can then pin to that host until the connection fails.
If the host suggestion service is not there, the client can simply pick a random host from its internal list, trying them until it connects.
Ideally, on first start-up the client connects to the host suggestion service and not only gets directed to an appropriate host but also receives a list of other potential hosts. This list can routinely be updated every time the client connects.
If the host suggestion service is down on the client's first attempt (unlikely, but...), you can pre-deploy a list of hosts in the client install so it can start randomly selecting hosts from the very beginning if it has to.
Make sure that your list of hosts contains actual host names, not IPs; that gives you more flexibility long term (i.e., you'll "always have" host1.example.com, host2.example.com, etc. even if you move infrastructure and change IPs).
You could take a look at how RedDwarf and its fork DimDwarf are built. They are both horizontally scalable, crash-only game application servers, and DimDwarf is partly written in Scala (the new messaging functionality). Their approach and architecture should match your needs quite well :)
2 cents..
"how to share state between multiple machines in a way that if one of them goes down things keep running"
Don't share state between machines; instead, partition state across machines. I don't know your domain, so I can't say whether this will work, but essentially, if you assign certain aggregates (in DDD terms) to certain nodes, you can keep those aggregates in memory (as an actor, agent, etc.) while they are being used. To do this you will need something like Zookeeper to coordinate which nodes handle which aggregates. In the event of a failure you can bring the aggregate up on a different node.
Furthermore, if you use an event-sourcing model to build your aggregates, it becomes almost trivial to keep real-time copies (slaves) of your aggregates on other nodes, by having those nodes listen for events and maintain their own copies.
By using Akka, we get remoting between nodes almost for free. This means that whichever node handles a request that needs to interact with an aggregate/entity on another node can do so with RemoteActors.
What I have outlined here is very general but gives an approach to distributed fault-tolerance with Akka and ZooKeeper. It may or may not help. I hope it does.
All the best,
Andy