Akka cluster and OpenShift - scala

I'm new to Akka Clusters; however, as I understand the documentation, I need to know at least one "seed node" to join an existing cluster.
So when running a cluster on OpenShift, I would need to know whether the current gear is the first node (in which case I would create a new cluster) or whether some other gears are already around, in which case I would need at least one of their IPs to join them.
Is this possible with the OpenShift cloud? (I'm using the DIY cartridge, so customizing the startup script wouldn't be a problem. However, I can't find any environment variable that provides the relevant data.)

DIY gears on OpenShift Online do not scale. And if you are spinning up separate applications for each of the nodes in your cluster, you will probably run into inter-gear communication issues. You might need to create your own Akka cartridge (http://docs.openshift.org/origin-m4/oo_cartridge_developers_guide.html); then you can set your own scaling options. You might also check out this cartridge (https://github.com/smarterclayton/openshift-redis-cart), which supports scaling and might give you some ideas about how to implement yours.
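Whichever route you take, once the start script can hand each gear at least one peer IP, the Akka side of the join can be done programmatically. A minimal sketch in Scala, assuming a hypothetical SEED_IPS environment variable that your cartridge's start script would populate (empty or unset means this gear is the first node):

    import akka.actor.{ActorSystem, Address}
    import akka.cluster.Cluster

    object ClusterBootstrap extends App {
      val system  = ActorSystem("app")
      val cluster = Cluster(system)

      // SEED_IPS is a hypothetical variable, e.g. "10.0.0.1,10.0.0.2".
      val seedIps = sys.env.getOrElse("SEED_IPS", "")
        .split(",").filter(_.nonEmpty).toList

      val seedAddresses = seedIps match {
        case Nil => List(cluster.selfAddress) // no peers: form a new cluster
        case ips => ips.map(ip => Address("akka.tcp", system.name, ip, 2551))
      }
      cluster.joinSeedNodes(seedAddresses)
    }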

Related

How to avoid congestion when using Kubernetes pods as Jenkins slaves

Our use case is pretty simple; however, I haven't found a solution for it yet.
In the organization I'm working at, we decided to move to Kubernetes as our container manager in order to spin up slaves.
Before we moved to this kind of environment, we had dedicated slaves per team. Each team got the resources it needed, and on that basis it worked.
However, once we moved to Kubernetes, issues started to appear, as all teams share the same pool of resources, which can lead to congestion or job failures.
The suggested solution was to create a Kubernetes cluster per team; however, this would lead to burnout of the teams involved in the maintenance of multiple clusters.
Searching online, I didn't find any available solution, hence I'm asking here: what is the best way to approach this? I understand that we might need to implement a dispatcher, but that currently isn't possible given the way the Kubernetes plugin is built.
Thanks,

Unable to understand BigchainDB behaviour

I started to implement BigchainDB. I have followed the tutorial from here.
I have set up two nodes, each running the BigchainDB server and MongoDB. I have added the node ID and address of each node to the other's configuration so that they can connect. I am able to create transactions on each node. So my questions are as follows:
How do the two nodes communicate and sync data with each other?
How is consensus achieved?
Why does this tutorial set up a cluster?
BigchainDB nodes communicate with each other using the Tendermint P2P protocol, and consensus is Tendermint consensus. To understand both better, here are some starting points:
The BigchainDB 2.0 Whitepaper
The Tendermint website and docs
Also, please ignore the old docs for versions of BigchainDB Server older than 2.0.0x.
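For reference, the node-ID-and-address wiring the question describes ends up in each node's Tendermint configuration. A minimal sketch of the relevant setting (the node ID and IP below are made up for illustration):

    # .tendermint/config/config.toml on node A, pointing at node B.
    # Format is <node_id>@<host>:<p2p_port>; Tendermint's default P2P port is 26656.
    [p2p]
    persistent_peers = "9b5b5bbe3f6a9ab4a8f87c05d3a1f0b2c3d4e5f6@203.0.113.12:26656"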

How can I have a GKE cluster "expire" and delete itself?

We stand up a lot of clusters for testing/PoC/development, and it's up to us to remember to delete them.
What I would like is a way of setting a TTL on an entire GKE cluster and having it deleted/purged automatically.
I could tag the clusters with a timestamp at creation and have an external process run on a schedule that reaps old clusters, but it'd be great if I didn't have to do that. It might be the only way, but maybe there is a GKE/k8s feature for this?
Is there a way to have the cluster delete itself without relying on an external service? I suppose it could spawn a Cloud Function itself, but I'm wondering whether there is a native GKE/k8s feature that does this more elegantly.
You can spawn a GKE cluster with alpha features enabled. Such clusters exist for one month at most and are then auto-deleted.
Read more: https://cloud.google.com/kubernetes-engine/docs/concepts/alpha-clusters
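For reference, alpha features are requested at creation time; something like this (the cluster name and zone are placeholders):

    gcloud container clusters create my-test-cluster \
        --zone us-central1-a \
        --enable-kubernetes-alpha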
Try Cloud Scheduler and hook it up to your build server. Cloud Scheduler supports HTTP, App Engine, and Pub/Sub targets.
I don't believe there is a native way to do this, but it doesn't seem unreasonable to use Cloud Scheduler to periodically trigger a Cloud Function that looks for appropriately labeled clusters and triggers their deletion via the API.
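A rough sketch of that label-and-reap approach using plain gcloud (the expires-at label is our own convention, and the exact format/projection fields are assumptions worth verifying against your gcloud version):

    # At creation: label the cluster with an expiry time in epoch seconds.
    gcloud container clusters create poc-cluster \
        --zone us-central1-a \
        --labels "expires-at=$(date -d '+3 days' +%s)"

    # Reaper, run on a schedule: delete any cluster past its expiry.
    now=$(date +%s)
    gcloud container clusters list \
        --format="csv[no-heading](name,location,resourceLabels.expires-at)" |
    while IFS=, read -r name location expires; do
        if [ -n "$expires" ] && [ "$expires" -lt "$now" ]; then
            gcloud container clusters delete "$name" --zone "$location" --quiet
        fi
    done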

How is red/black deployment strategy achieved?

I recently ran across this Netflix Blog article http://techblog.netflix.com/2013/08/deploying-netflix-api.html
They are talking about red/black deployment, where they run the old and the new code side by side and direct production traffic to both of them. If something goes wrong, they do a rollback.
How does the directing of the traffic work? And is it possible to adapt this strategy with, e.g., two Docker containers?
One way of directing traffic is weighted routing, as you can do in AWS Route 53.
Initially you have 100% of traffic going to the server(s) with the old code. Then you gradually shift some of that traffic to the server(s) with the new code.
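As a sketch, the weights live on the DNS record sets themselves. A change batch like the following (zone, names, and IPs are made up), applied with aws route53 change-resource-record-sets, sends roughly 10% of lookups to the new code:

    {
      "Changes": [
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "api.example.com.",
            "Type": "A",
            "SetIdentifier": "old-code",
            "Weight": 90,
            "TTL": 60,
            "ResourceRecords": [{ "Value": "203.0.113.10" }]
          }
        },
        {
          "Action": "UPSERT",
          "ResourceRecordSet": {
            "Name": "api.example.com.",
            "Type": "A",
            "SetIdentifier": "new-code",
            "Weight": 10,
            "TTL": 60,
            "ResourceRecords": [{ "Value": "203.0.113.20" }]
          }
        }
      ]
    }

Rolling back is then just another change batch that sets the new code's weight to 0.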
Also, as you can read in this blog, you can use Docker to achieve it:
Even with the best testing, things can go wrong after deployment and a rollback may be required. Containers make this easy and we've brought similar tools to the operating system with Project Atomic. Red/Black deployments can be done throughout the entire stack with Atomic and Docker.
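For the two-Docker-containers case specifically, one common way (a sketch, not something from the quoted blog) is to put a reverse proxy such as nginx in front of both containers and shift the weights between them (old-app and new-app are hypothetical container names):

    upstream app {
        # Shift weight gradually from old to new; roll back by restoring it.
        server old-app:8080 weight=9;
        server new-app:8080 weight=1;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app;
        }
    }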
I think they use Spinnaker to implement a red/black strategy. https://spinnaker.io/docs/concepts/

Scalable MEAN.js on DigitalOcean

I'm trying to learn a deployment process that can guarantee headache-free scaling of a MEAN.js application (not at the level big companies do it, but also not at a hobby level).
As far as I understand, this could be a solution to work on:
Having MongoDB on DigitalOcean on Ubuntu
Having the MEAN.js application (everything other than MongoDB) in a Docker container
Then one can scale! Because MongoDB can be clustered separately, and Docker keeps the scaling of the application easy.
Well, I know it sounds trivial, and that's why I'm asking here: I just want to go and learn Docker and want to be sure before investing time in the solution assumed above!
Do you think this guarantees easy scaling, say, for a simple online multiplayer game on MEAN.js? Thank you.
UPDATE 31/07/2018
DigitalOcean is introducing Kubernetes, which does all the orchestration. They have also released a load balancer, which I think will work well with Kubernetes.
==============
There is no off-the-shelf solution.
You can use Docker with Swarm, but for a small deployment it brings additional monitoring and networking issues.
So here is what I did:
Create a script that regenerates the HAProxy config whenever you start/stop an instance.
Have Mongo in a cluster or a replica set or whatever. The database usually does not need to be scaled dynamically: you start with a single Mongo server and scale it up; when you can't scale vertically anymore, you scale horizontally by creating a replica set; and when that no longer suffices, you shard.
Have HAProxy as the load balancer: it accepts connections on port 80 and forwards them to your droplets over the private network (see the sketch below).
You can also write scripts that use the DO API to create an image of your deployment and fire it up once you have more traffic, either dynamically (by watching response time, CPU load, or whatever other metric you have) or statically.
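A minimal sketch of the generated HAProxy config (the droplet private IPs and the app port 3000 are placeholders; the start/stop script from the first step would rewrite the server lines):

    frontend www
        bind *:80
        mode http
        default_backend meanjs_nodes

    backend meanjs_nodes
        mode http
        balance roundrobin
        # One line per droplet, maintained by the start/stop script.
        server app1 10.132.0.11:3000 check
        server app2 10.132.0.12:3000 check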
I hope this helps.