What are the benefits of building an Android application with Kubernetes/Containers

I will be building an Android application (not a game) soon. I have heard of containerized development and Docker/Kubernetes, but I'm not well-versed in their functions and use cases.
Why should I build my Android application with Kubernetes?

Your question can be split up into two parts:
1. Why should I containerize my deployment?
I hope that by "deployment" you mean the backend services that serve your Android application, not the application itself (I'm not sure how one would do that...). Here is a good article.
Containerization is a powerful abstraction that can help you manage both your code and your environment. Setting up a container with the correct dependencies, utilities, etc., and securing them is a lot of work, as is the case with any server setup. However, once you have packaged everything into a container, you can deploy that container multiple times and build on top of it. The value of the grunt work you did in the past is therefore carried forward into your future deployments; conversely, so are the bugs... You can also leverage the Docker ecosystem and build on various community contributions, greatly accelerating your workflows.
A possible unintended advantage is protection against configuration drift. Whenever a service fails or your application crashes, you can simply restart your container, and a fresh instance of the service is created. However, to support these operations, you need to ensure that your containerized service behaves well across restarts and fails gracefully. There are many other caveats and advantages that are not listed here; you can find more discussion online.
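To make that concrete, here is a minimal, hedged sketch of what "packaging everything" plus an automatic restart policy can look like with Docker Compose. Every image name, port, and value below is a placeholder invented for illustration, not taken from the question:

```yaml
# docker-compose.yml -- hypothetical backend for a mobile app.
# Image names, ports, and credentials are placeholders.
version: "3.8"
services:
  api:
    image: registry.example.com/my-backend:1.4.2   # pin a version, avoid :latest
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db
    restart: unless-stopped   # crashed containers are recreated fresh
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example   # placeholder secret
    volumes:
      - pgdata:/var/lib/postgresql/data   # state lives outside the container
    restart: unless-stopped
volumes:
  pgdata:
```

Note that the database state is kept in a named volume precisely so that the "restart to get a fresh service" pattern doesn't destroy data; that is part of behaving well across restarts.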
2. Why should I use Kubernetes for my container orchestration?
If you have many containers (think on the order of hundreds), then using a single-node solution like Docker/docker-compose to manage them becomes tedious.
If only there were a tool to manage containers across multiple nodes, implement service discovery between them, provide fault tolerance (i.e., automatic restarts, backoff policies), health-check your services, manage storage, and conveniently expose your containers to the public. That tool is Kubernetes.
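To make those features concrete, here is a hedged sketch of a Kubernetes Deployment plus Service; every name, image, and path is a placeholder. The replicas field and liveness probe give you automatic restarts and health checking, and the Service name doubles as a DNS entry for service discovery:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-backend
spec:
  replicas: 3                         # Kubernetes keeps three copies running
  selector:
    matchLabels:
      app: my-backend
  template:
    metadata:
      labels:
        app: my-backend
    spec:
      containers:
        - name: my-backend
          image: registry.example.com/my-backend:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:              # health check; failures trigger a restart
            httpGet:
              path: /health           # assumed health endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: my-backend                    # resolvable in-cluster as "my-backend"
spec:
  selector:
    app: my-backend
  ports:
    - port: 80
      targetPort: 8080
```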
Here is a more in-depth intro.
Hope this helps!

Related

local development of microservices, methods and tools to work efficiently

I work with team members to develop a microservices architecture, but I have a problem with the way we work. I have too many microservices, and when I run them during development they consume too much memory, even on a good workstation. So I use Docker Compose to build and run my MSA, but it takes a long time. One often hears about how to technically build an MSA, but never about how to work on one efficiently. What do you do in this case? How do you work? Do you use tools or anything else to improve and facilitate your development? I've heard about Skaffold, but I don't see how it differs from Docker Compose, or from a simple CI/CD pipeline in a cluster environment, for example. Feel free to give tips and your opinion. Thanks
I've had a fair amount of experience with microservices and local development, and here are some approaches I've seen:
Run everything locally in Docker or k8s. If using k8s, a tool like Skaffold can make it easier to run and debug a service locally in the IDE while deploying it into your local k8s cluster so that it can communicate with the other k8s services (a minimal Skaffold config is sketched at the end of this answer). It works OK, but running more than 4 or 5 full services locally in k8s or Docker requires dedicating a substantial amount of CPU and memory.
Build mock versions of all your services. Use those locally and for integration tests. The mock services are intentionally much simpler, so it's easier to run lots of them locally. The obvious downside is that you have to build a mock version of every service, and you can easily miss bugs caused by mock services not behaving like the real ones. Record/replay tools like Hoverfly can help in building mock services.
Give every developer their own cloud environment. Run most services in the cloud, but use a tool like Telepresence to swap locally running services in and out of the cloud cluster. This eliminates the problem of running too many services on a single machine, but it can be expensive to maintain a separate cloud sandbox for each developer. You also need a DevOps resource to help developers when their cloud sandbox gets out of whack.
Eliminate unnecessary microservice complexity and consolidate all your services into 1 or 2 monoliths. Enjoy being able to run everything locally as a single service. Accept the fact that a microservice architecture is overkill for most companies. Too many people choose a microservice architecture upfront before their needs demand it. Or they do it out of fear that they will need it in the future. Inevitably this leads to guessing how they should decompose the system into many microservices, and getting the boundaries and contracts wrong, which makes it just as hard or harder to fix in the future compared to a monolith. And they incur the costs of microservices years before they need to. Microservices make everything more costly and painful, from local development to deployment. For companies like Netflix and Amazon, it's necessary. For most of us, it's not.
I prefer option 4 if at all possible. Otherwise option 2 or 3 in that order. Option 1 should be avoided in my opinion but it is probably the option everyone tries first.
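For completeness, here is what a minimal Skaffold config for option 1 might look like; the schema version, image name, and manifest path are assumptions for illustration, not a definitive setup:

```yaml
# skaffold.yaml -- hypothetical minimal config
apiVersion: skaffold/v2beta29
kind: Config
build:
  artifacts:
    - image: my-service        # placeholder; rebuilt whenever the source changes
deploy:
  kubectl:
    manifests:
      - k8s/*.yaml             # placeholder path to your Kubernetes manifests
```

Running skaffold dev then watches your source, rebuilds the image, and redeploys it into your local cluster so it can talk to the other services.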
In GKE, assuming you have a private cluster, you can use port forwarding while connected to the GKE environment through the CLI. Create a script that forwards your local ports to the GKE environment; on the Services tab of your cluster you will find the "Port forwarding" button that gives you the command. This way you can work on one microservice with all of its traffic being routed to the actual DEV cluster, which saves you from having to run multiple projects at the same time.
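A hedged sketch of what such a script can look like; the cluster, zone, service names, namespaces, and ports are all placeholders, so copy the exact commands GKE generates for you instead:

```sh
#!/bin/sh
# Authenticate kubectl against the (hypothetical) dev cluster.
gcloud container clusters get-credentials dev-cluster --zone us-central1-a

# Forward local ports to in-cluster services (names/ports are placeholders).
kubectl port-forward svc/orders  8081:80 -n dev &
kubectl port-forward svc/billing 8082:80 -n dev &

wait   # keep the tunnels open until interrupted
```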
I would say create a staging environment in which all services are running. This staging environment is specifically curated for development: e.g., if it's deployed on k8s, you expose the ports you need for your specific microservice using a NodePort service (a sketch follows below). And have a DevOps pipeline that always keeps this environment up to date with the code.
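As a hedged sketch, exposing one service of the staging environment via NodePort could look like this (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-staging
spec:
  type: NodePort
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # reachable as <node-ip>:30080 from your workstation
```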
This environment should always be built from the master branch. Whether you have a single repo for the app or a repo per service, it's a fair assumption that it will always contain the most recent code when you create your dev/feature branch.
Then, when you want to develop a feature or fix a bug, you check out your microservice. If you are following the microservice pattern properly, that single microservice should be an executable with its own Dockerfile, debuggable from your local IDE. Many enterprises follow this pattern and enforce at the organization level that the master branch is always production-ready and high quality.
Say you discover a bug in some other microservice running in the k8s cluster. You will very likely be tempted to find a way to debug that remote microservice. However, that should be filed as a bug for the team that owns the microservice. If your team owns it, fix it first and then start working on your feature. If you really think you need to debug multiple microservices at once, then either you have really tight coupling between the services or you don't really need the microservice architecture.

Running an Elixir program in a Kubernetes swarm

I'm fairly new to Elixir and container orchestration technologies like Kubernetes. I know that Elixir runs on the BEAM, which I've heard makes it possible to easily run programs on a network of computers. From what I understand of Kubernetes, Kubernetes manages a swarm of containers in a virtual network.
My question: would it be a good idea to use Kubernetes to manage a scalable swarm of containers running an Elixir program? If so, are there any pre-configured Kubernetes setups that make this easy?
We just moved to Kubernetes for container management at my startup. I have a couple of opinions on this:
If the goal is simply to distribute an Elixir application over a "swarm" of nodes and/or machines, Kubernetes adds quite a bit of complexity and plumbing toward that goal. You end up with not just the learning curve of BEAM distribution, but that of the Kubernetes ecosystem itself.
However, Kubernetes and Elixir applications can work really well together, especially if your application deployment is more complex than a pure Elixir project. For example, we have custom services written in JavaScript, Python, Elixir, and Go in our application deployment (not to mention a dozen random tools).
Kubernetes helps extend the ability to distribute and manage nodes for all of these services with one strategy.
To answer your question: it can be a good idea, but I'd recommend first learning to manage the distributed Elixir program by itself. It's not quite as simple as everyone makes it out to be, although it's better than what most other languages offer, IMO. Here's a good guide to getting started with distributed Elixir/Erlang: http://engineering.pivotal.io/post/how-to-set-up-an-elixir-cluster-on-amazon-ec2/.
You can isolate that learning curve, and once you feel comfortable with it move on to kubernetes. Here's a really good resource on getting started with swarming via kubernetes: http://bitwalker.org/posts/2016-08-04-clustering-in-kubernetes/
Oops, one final comment: the advantage of tackling both learning curves is that you end up with the power of distributed Elixir, the rest of your application deployment can keep up, and the Kubernetes ecosystem enables a lot of very cool tech like auto-scaling, helping you avoid lock-in to a single cloud provider, etc.
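As a hedged sketch of where Kubernetes and BEAM distribution meet: a headless Service gives every pod a resolvable DNS record, which node-discovery libraries (libcluster is a common choice in Elixir) can use to connect the Erlang nodes. The names below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodes
spec:
  clusterIP: None        # "headless": DNS returns the individual pod IPs
  selector:
    app: myapp           # placeholder label on your Elixir pods
  ports:
    - name: epmd
      port: 4369         # EPMD, the Erlang port-mapper daemon
```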
Our team used Elixir release binaries with Docker/Consul for three years.
When each service runs on fewer than three servers, plain containers are fine for management.
Beyond that, it becomes painful to scale your services (servers are expensive, and you cannot easily gauge the load on each service).
I experienced this and tried using Swarm. It works like a charm! You can easily scale your services and manage servers.
So, once you are dealing with many servers, it is a good idea to use Kubernetes to manage a scalable swarm of containers running an Elixir program.

What does Apache Mesos do that Kubernetes can't do and vice-versa?

What does Apache Mesos do that Kubernetes can't do or vice-versa?
Mesos is a two-level scheduler. Sure, it gathers resource information from every machine and offers it to a top-level scheduler, so that frameworks like Kubernetes can use it to schedule containers across machines; but Kubernetes can schedule containers across machines by itself (no need for Mesos in this regard). So what are a few things Apache Mesos can do that Kubernetes cannot, or vice versa?
Both Mesos and Kubernetes are container orchestrators, and they can even be layered: in fact, you can run Kubernetes on Mesos and vice versa. You can achieve the same features with either, but some kinds of tasks are easier (read: better) on one of them.
Let's go through main differences that give some clue when you need to make a decision:
Architecture
As you pointed out, Mesos is a two-level scheduler, and this is the main architectural difference. It gives you the ability to create your own custom scheduler (a.k.a. framework) to run your tasks. What's more, you can have more than one scheduler. All your schedulers compete for resources, which are fairly distributed using the Dominant Resource Fairness algorithm (which can be replaced with a custom allocator). You can also assign roles to frameworks and tasks, and assign weights to these roles to prioritize some schedulers. Roles are tightly connected with resources. These features give you the ability to create your own way of scheduling for different applications (e.g., Fenzo), with different heuristics based on the type of tasks you want to run. For example, when running batch tasks, it's good to place them near their data, and time-to-start is not so important. On the other hand, stateless services are independent of particular nodes, and it's more critical to start them ASAP.
Kubernetes, by contrast, has a single-level scheduler: decisions about where a pod will run are made in a single component, and there is no such thing as a resource offer. On the other hand, everything in Kubernetes is pluggable and built with a layered design.
Origin
Mesos was created at Berkeley, but its first production usage was at Twitter, to support their scale.
In March 2010, about a year into the Mesos project, Hindman and his Berkeley colleagues gave a talk at Twitter. At first, he was disappointed. Only about eight people showed up. But then Twitter's chief scientist told him that eight people was a lot – about ten percent of the company's entire staff. And then, after the talk, three of those people approached him.
Soon, Hindman was consulting at Twitter, working hand-in-hand with those ex-Google engineers and others to expand the project. Then he joined the company as an intern. And, a year after that, he signed on as a full-time employee.
source
Kubernetes was created by Google to bring users to their cloud, promising a no-lock-in experience. This is the same strategy Amazon used with the Kindle: you can read any book on it, but you get the best experience inside Amazon's ecosystem. The same is true for Google: you can run Kubernetes on any cloud (public or private), but the best tooling, integration, and support is on Google Cloud.
But Google and Microsoft are different. Microsoft wants to support everything on Azure, while Google wants Kubernetes everywhere. (In a sense, Microsoft is living up to the Borg name, assimilating all orchestrators, more than Google is.) And quite literally, Kubernetes is how Google is playing up to the on-premises cloud crowd giving it differentiation from AWS (which won’t sell its infrastructure as a stack with a license, although it says VMware is its private cloud partner) and Microsoft (which still doesn’t have its Azure Stack private cloud out the door). source
Community
Judging a project simply by its community size can be misleading. It's like saying PHP is a great language because it has a large community.
The Mesos community is much smaller than the Kubernetes one. That's a fact. Kubernetes has financial support from many big companies, including Google, Intel, Mirantis, Red Hat, and more, while Mesos is developed mainly by Mesosphere, with some support from Apple and Microsoft. Although Mesos is a mature project, its development is slow but stable. Kubernetes, on the other hand, is much younger, but rapidly developed.
Mesos contributors by origin
The Kubernetes Community - Ian Lewis, Developer Advocate, Google
Scale
Mesos targeted big customers from the very beginning. It is used at Twitter, Apple, Verizon, Yelp, and Netflix to run hundreds of thousands of containers on thousands of servers.
Kubernetes was started by Google to give developers the Google infrastructure experience (GIFEE, "Google Infrastructure For Everyone Else"). From the beginning it was designed for small scale, up to hundreds of machines; this limit has been raised with every release, but they started small to grow big. There is no public data about the biggest Kubernetes installation.
Hype
Due to those scale limits, Kubernetes first became popular among smaller (not cloud-scale) companies, while Mesos targeted enterprise users. Kubernetes is backed by the Cloud Native Computing Foundation, while Mesos is an Apache Software Foundation project. These two foundations have different funding and sponsors. Generally, more money buys better marketing, and Kubernetes has definitely done that right.
https://g.co/trends/RUuhA
Conclusion
It looks like Kubernetes has already won the container orchestrator war. But if you have custom workloads and really big scale, Mesos could be a good choice.
The main difference is in the community size and the open-source model: whereas DC/OS is supported by Mesosphere and provides enterprise features only in a commercial product (because Mesosphere isn't a philanthropist), K8S has a larger community with strong contributions from different companies, resulting in many more integrated enterprise features (multitenancy, RBAC, quotas, preemption, gateways...). This means they are easier to use; it's not that they don't exist in DC/OS.
Broadly, I would say that:
DC/OS is more battle-tested for stateful and big-data workloads, but it lacks integration with surrounding components, including plug-and-play central monitoring and logging, and enterprise features like a security model, multi-tenancy, and auto-updates. It was a very hard road to integrate everything into a production-grade platform.
K8S is more battle-tested for stateless apps and provides lots of plug-and-play tools like Prometheus, EFK, and Helm, which makes implementing a production-grade platform much easier. Beyond that, there is a big push on stateful workloads with StatefulSets and the operator pattern, which is comparable to Mesos frameworks; but again, K8S provides lots of tools for developing them at lower cost, because much of the functionality comes out of the box. It took me two months to develop a MongoDB operator providing MongoDB as a service in a multi-tenant, secured way, and I had to learn Golang at the same time.
source
https://www.infoworld.com/article/3118345/cloud-computing/why-kubernetes-is-winning-the-container-war.html
https://www.theregister.co.uk/2017/10/17/docker_ee_kubernetes_support
https://www.techrepublic.com/article/these-two-vendors-are-most-likely-to-bring-kubernetes-containers-to-the-enterprise
https://www.cloudhealthtech.com/blog/container-wars-are-over-kubernetes-has-won
https://news.ycombinator.com/item?id=12462261

Micro services with JBOSS

I am new to JBoss and want to know whether a microservices architecture is a good choice on JBoss. I cannot change the application server, as it was decided by the client's architect and I have no choice.
I want to know whether we can develop microservices with an underlying JBoss application server.
I understand Spring Boot comes with an embedded Tomcat container, which makes it flexible to stop and start, and to deploy an individual service with no impact on other services.
However, will that architecture work with JBoss too?
Please suggest.
Thanks,
I actually ran a feasibility study to investigate the solution you mention. My conclusion is that it is entirely viable to apply microservice principles on a JBoss platform.
I used a combination of JBoss, Spring Boot, and Netflix OSS to create a successful microservice stack. I did this mainly to find a solution to the transaction problem (multiple microservices collaborating) and to the fan-out problem, which is caused by excessive network communication and serialization costs.
I also wrote a blog post about the subject; you might find more details there if you like. Here is the link.
Micro Services – Fan Out, Transaction Problems and Solutions with Spring Boot/JBoss and Netflix Eureka
By the definition of what microservices are, conceptually yes. A microservice is an independent unit: it can be deployed, updated, and undeployed independently, without affecting any unrelated part of your application. That would mean having multiple JBoss instances, one per microservice, with your application calling them through some sort of gateway or other mechanism depending on your use case. If you plan to deploy all your microservices in the same JBoss instance, it defeats the very purpose of a microservice. Given that, JBoss wouldn't be the right choice for microservice deployment, because it would make your deployment infrastructure quite heavy.
Depending on what your client's requirements are, you could possibly keep your webapp in JBoss and deploy your MS containers separately.
It depends on what you want to get out of microservices.
Some of the developers at my organisation looked at Spring Boot but concluded that it's best run as a standalone container rather than in JBoss; otherwise you've effectively got two container frameworks competing (Spring Boot and JBoss), with a range of associated issues.
Deploying microservices in JBoss won't give you the same flexibility as a true container system like Docker. With Docker you create standalone packages for your microservices that contain all the code, system tools, runtime environment, etc. They can be as small or as large as they need to be. JBoss, on the other hand, is a large container running a single JVM, designed to hold multiple applications. The level of isolation is not the same, and it's not efficient to use JBoss as a container for a single microservice, so you have to size the instance appropriately and then co-deploy services onto it to make use of the resources it has available.
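To illustrate the packaging difference, here is a hedged sketch of a self-contained microservice image (the base image, jar name, and port are placeholders): the container carries exactly one service and its runtime, in contrast to a shared JBoss instance hosting many deployments:

```dockerfile
# Hypothetical Dockerfile for a single Java microservice.
FROM eclipse-temurin:17-jre              # placeholder JRE base image
WORKDIR /app
COPY target/orders-service.jar app.jar   # the service and nothing else
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```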
If you're looking at microservices as a way to gain greater control over service lifecycle management (deployment, versioning, deprecating, etc.) as opposed to an automated, web-scale component deployment model a la Netflix or LinkedIn, you could do this adequately with JBoss.
I'm actually looking to do something along these lines here. It won't be true microservices, but by packaging and deploying individual, properly versioned APIs rather than monolithic applications, and following most of the other principles of microservice development (componentisation, business-function focus, statelessness, etc.), we will hopefully be better able to manage and benefit from our APIs.
Our APIs will all sit behind an API gateway and load balancer, so we can choose how the microservices are distributed across the JBoss instances and balance resource usage as required. Note that our organisation is relatively small with relatively low and predictable traffic, so this approach should work fine. Your needs may be different.

Docker deployment options

I'm wondering what options there are for Docker container deployment in production. Say I have separate APP and DB server containers, plus data-only containers holding deployables and others holding database files.
I have just one server for now, which I would like to "Docker-enable", but what is the best way to deploy to it (remotely would be the best option)?
I just want to hit a button and have some tool take care of stopping, starting, and exchanging all the needed Docker containers.
There is a myriad of tools (Fleet, Flocker, Docker Compose, etc.), and I'm overwhelmed by the choices.
The only thing I'm clear on is that I don't want to build images from code in a git repo; I would like Docker images to be wrappers for my releases. Have I grasped the Docker idea from the wrong end?
My team recently built a Docker continuous deployment system and I thought I'd share it here since you seem to have the same questions we had. It pretty much does what you asked:
"hit a button and some tool will take care of stopping, starting, exchanging all needed docker containers"
We had the challenge that our Docker deployment scripts were getting too complex. Our containers depend on each other in various ways to make the full system so when we deployed, we'd often have dependency issues crop up.
We built a system called "Skopos" to resolve these issues. Skopos inspects the current state of your running system, detects any changes being made, and then automatically plans and deploys the update into production. It creates deployment plans dynamically for each deployment, based on a comparison of the current state and the desired state.
It can help you continuously deploy your application or service to production using tags in your repository to automatically roll out the right version to the right platform while removing the need for manual procedures or scripts.
It's free, check it out: http://datagridsys.com/getstarted/
You can import your system in 3 ways:
1. If you have a Docker Compose file, we can pull that in and start working with it (a sketch of such a file follows this list).
2. If your app is running, we can scan it and then start working with it.
3. If you have neither, you can create a quick descriptor file in YAML, and we can understand your current state from that.
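For reference, the APP/DB-plus-data-containers layout from the question maps naturally onto a Compose file like the hedged sketch below (all names and values are placeholders; modern named volumes play the role of classic data-only containers):

```yaml
version: "3.8"
services:
  app:
    image: myrepo/app:1.0.0       # a release image wrapping a build, as described
    ports:
      - "80:8080"
    volumes:
      - deployables:/opt/app/releases
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example  # placeholder secret
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  deployables:   # replaces a data-only container for deployables
  dbdata:        # replaces a data-only container for DB files
```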
I think most people start their container journey using tools from Docker Toolbox. Those tools provide a good start and work as promised, but you'll end up wanting more. With these tools you are missing, for example, integrated overlay networking, DNS, load balancing, aggregated logging, VPN access, and a private image registry, which are crucial for most container workloads.
To solve these problems we started developing Kontena, a Docker container orchestration platform. While Kontena works great for all types of businesses and may be used to run containerized workloads at any scale, it's best suited to start-ups and small-to-medium-sized businesses that require a worry-free and simple-to-use platform for running containerized workloads.
Kontena is an open source project and you can view it on GitHub.