I deployed a single-container SailsJS application with Docker (the image is around 597.4 MB) and have hooked it up to Elastic Beanstalk.
However, since ECS was built for Docker, might it be better to use that over EB?
Elastic Beanstalk (EB) is a PaaS offering in the AWS family, and it works with high-level concepts: you have applications and versions, and you create environments.
EC2 Container Service (ECS) is a much lower-level cluster-scheduling platform. You have to describe a lot of configuration for your Docker containers yourself, link them, and manually set up load balancers and everything else you need.
So EB is much simpler to use and maintain, while ECS is more complicated but uses your resources more efficiently.
Also, EB has two different Docker types: single-container and multi-container. Multi-container uses ECS internally.
My advice: use Elastic Beanstalk. ECS is a good fit if you have a large number of different applications that you need to run efficiently in a cluster.
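For a single-container app like yours, the EB workflow can stay very simple. A minimal sketch with the EB CLI (the application name, environment name, and region are placeholders):

    # Initialize an EB application on the single-container Docker platform
    eb init -p docker my-sails-app --region us-east-1

    # Create an environment; EB provisions the EC2 instances, load balancer,
    # auto scaling and monitoring for you
    eb create my-sails-env

    # Build a new application version from the Dockerfile in this directory
    # and roll it out to the environment
    eb deploy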
I'm planning to deploy a small Kubernetes cluster (3x 32GB nodes). I'm not experienced with K8s, and I need to come up with some kind of resilient SQL database setup; CockroachDB seems like a great choice.
I wonder if it's possible to relatively easily deploy a configuration where some CockroachDB instances (nodes?) live inside the K8s cluster, while some other instances live outside it (2 on-premise VMs). All of those CockroachDB instances would need to form a single CockroachDB cluster. It might also be worth noting that Kubernetes would be hosted in the cloud (e.g. Linode).
By relatively easy I mean:
reasonably simple to deploy
requiring little maintenance
Yes, it's straightforward to do a multi-cloud deployment of CRDB. This is one of the great advantages of CockroachDB. Simply run the cockroach start command on each of the VMs/pods running CockroachDB and they will form a cluster.
See this blog post/tutorial for more info: https://www.cockroachlabs.com/blog/multi-cloud-deployment/
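As a rough sketch (the addresses and the --insecure flag below are placeholders for illustration; a real cluster would use certificates), every node, whether it is a K8s pod or an on-premise VM, is started with a --join list of peers it can reach:

    # On every node (K8s pod or on-premise VM), pointing --join at a few
    # reachable members of the same cluster
    cockroach start \
      --insecure \
      --advertise-addr=<this-node-address> \
      --join=crdb-0.example.com,crdb-1.example.com,vm-1.onprem.local

    # Run once, against any one node, to bootstrap the cluster
    cockroach init --insecure --host=crdb-0.example.com

The main practical requirement is that the K8s pods are reachable from the on-premise VMs (for example via a NodePort or LoadBalancer service), and vice versa.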
I'm developing a Kubernetes scheduler and I want to test its performance when nodes join and leave a cluster, as well as how it handles node failures.
What is the best way to test this locally on Windows 10?
Thanks in advance!
Unfortunately, you can't add nodes to Docker Desktop with Kubernetes enabled. Docker Desktop is single-node only.
I can think of two possible solutions, off the top of my head:
You could use any of the cloud providers. The major ones (AWS, GCP, Azure) have some kind of free tier (limited by usage or by time). Adding nodes in those environments is trivial.
Create a local VM for each node. This is a less-than-perfect solution - very resource intensive. To make adding nodes easier, you could use kubeadm to provision your cluster (see the sketch below).
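If you go the VM route, a rough kubeadm sketch looks like this (the IPs, token, and node name are placeholders); joining and then draining or deleting VMs is a simple way to exercise how your scheduler reacts to nodes coming and going:

    # On the control-plane VM
    kubeadm init --pod-network-cidr=10.244.0.0/16

    # On each worker VM, using the token printed by kubeadm init
    kubeadm join 192.168.56.10:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>

    # Simulate a node leaving or failing
    kubectl drain worker-1 --ignore-daemonsets
    kubectl delete node worker-1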
I have recently been reading more about infrastructure as a service (IaaS) and platform as a service (PaaS) and had some questions. I see when we opt for a PaaS solution, it is generally very easy to create the infrastructure as the cloud providers handle that for us and we can even automate the deployment using an infrastructure as code solution like Terraform.
But if we use an IaaS solution, or even a local on-premise cluster, we seem to lose a lot of the automation that PaaS allows. So I was curious: are there any good tools out there for automating infrastructure deployment on a local cluster that is not in the cloud?
The best thing I could think of was to run a local Kubernetes cluster and then Dockerize each of the infrastructure components, but this seems difficult as each node in the cluster will need its own specific configuration files.
From my basic Googling, it seems like there is not a good solution to this.
Edit:
I was not clear enough with my original intentions. I have two problems I am trying to solve.
How do I automate infrastructure deployment locally? For example, suppose I wanted to create a Hadoop HDFS cluster. I would need to configure one node to be the namenode with an accessible IP, and the other nodes to be datanodes that are aware of the namenode's IP. At the moment, I have to do this manually by logging into each node, checking its IP, and then configuring each one. How would I automate this? If I were to use a Kubernetes approach, how do I specify that one of the running pods needs to be the namenode and the others are datanodes? How do I find the pods' IPs and make them aware of the namenode's IP?
The next problem I have is very similar to the first, but with a slight modification. How would I deploy specific configuration files to each node? For instance, in Kafka the configuration file for one node requires the IPs of the Zookeeper nodes as well as the IP it should listen on. This may be different for every node in the cluster. Is there a good way to make these config files pod-specific, so that I do not have to do bash text processing to insert the correct contents into each pod's config files?
You can use Terraform for all of your on-premise infrastructure automation, and Ansible for configuration management.
Let's say you have three HPE servers. Install K8s or VMware on them using Ansible, then you can treat them as three availability zones in one region, the same as in AWS. From there you can start deploying Dockerized apps or Helm charts using Terraform.
Summary:
Ansible for installing and configuring K8s.
Terraform for provisioning K8s.
Helm for installing apps on K8s.
After this you will have a basic automated on-premise infrastructure.
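A rough sketch of that workflow from the command line (the playbook, inventory, and chart paths are hypothetical):

    # 1. Ansible installs and configures Kubernetes on the three servers
    ansible-playbook -i inventory/on-prem k8s-install.yml

    # 2. Terraform provisions resources against the resulting cluster
    terraform init
    terraform apply

    # 3. Helm installs the applications on top of Kubernetes
    helm install my-app ./charts/my-app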
A Kubernetes Pod and an AWS ECS Task Definition both support multiple different container images: each instance of the pod / task will run all images as containers together.
Does CloudFoundry support a similar concept to allow apps that consist of multiple, separate processes?
Actually, Cloud Foundry has a community project for container orchestration based on Kubernetes, so that will accept pods the same way Kubernetes does.
You can read more about it here
Cloud Foundry also has the CF Application Runtime, which is essentially their PaaS and lets you deploy applications Heroku-style; under the hood these run as 'containers'. It's not clear from the docs what type of containers they are. You could presumably find out more by reading the code, but that detail isn't exposed to users, and the containers aren't exposed as pods.
tl;dr
No. You can only run a single container per application instance.
Longer Answer
Most of the answers quickly point you to PKS; however, Cloud Foundry itself is separate from that.
Cloud Foundry runs each application via Diego. Each application runs as a standalone container on a diego-cell. This is different from Kubernetes, where you think in terms of pods, or groups of colocated containers.
Cloud Foundry allows you to run multiple instances of each container, but I believe this is different from what you are asking.
Workaround
You may not be able to run multiple containers, but you can run multiple processes. For an example of this, check out how CF-FaaS runs. It uses the CF-Space-Security processes in a collocated scheme.
Pivotal now provides PAS - Pivotal Application Service, which is the traditional PaaS.
As a developer, I cf push my archive, the platform creates the container, and the Diego orchestrator runs my application. And yes, I can run multiple instances of my app.
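A minimal sketch of that workflow (the app name and artifact path are placeholders):

    # Push an archive; the platform builds the container and Diego runs it
    cf push my-app -p build/my-app.jar

    # Scale out to multiple instances of the same app
    cf scale my-app -i 3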
PKS - Pivotal Container Service (the cool kids spell it with a 'K') - is Pivotal's implementation of Kubernetes. It is CaaS - Container as a Service. As a developer, I create my own container (a Docker container), or a vendor provides me one, and PKS runs that container in a pod inside a PKS cluster.
The next one coming out from Pivotal, some time in the next 3-6 months, is PFS - Pivotal Function Service. It is Pivotal's implementation of Function as a Service. As a developer, I can create and deploy a function to PFS. I have to identify the triggers for this function, based on which PFS will spin up new instances of the function and, when done, destroy them.
How you use what, depends on your use case.
This deck is for the presentation at Dallas Cloud Native Meetup's last session. Parth did a great job simplifying and explaining the differences and how you choose. Hope you can access it. Take a look.
I would like to set up an HA Swarm / Kubernetes cluster based on low-power architecture (ARM).
My main objective is to learn how an HA web cluster works, how it reacts to and recovers from failures, and how easy it is to scale.
I would like to host a blog on it as well as other services once it is working (git / custom services / home automation / CI server / ...).
Here are my first questions:
Regarding the hardware, which is more appropriate: RPi3, Odroid-C2, or something else? I intend to have 4-6 nodes to start. Low power consumption is important to me since it will be running 24/7 at home.
What is the best architecture to follow? I would like to run everything in containers (for scalability and redundancy), and have redundant load balancers, web servers, and databases. Something like this: architecture
Would it be possible to have the web servers / databases distributed across the whole cluster, and load balancing on 2-3 nodes? Or is it better to separate them physically?
Which technology is better suited (Swarm / Kubernetes / Ansible to deploy / Flocker for storage)? I have read a lot about this topic lately, but there are a lot of choices.
Thanks for your answers!
EDIT1: infrastructure deployment and management
I have almost all the hardware, and I am now looking for a way to easily manage and deploy the 5 (or more) Pis. I want the procedure to be as scalable as possible.
Is there some way to:
retrieve an image from the network the first time (PXE-boot style)
apply custom settings for each node: network config (IP), SSH access, ...
automatically deploy / update new software on servers
easily add new nodes to the cluster
I can have a dedicated Pi, or my PC, act as the deployment server.
Thanks for your input!
Raspberry Pi, ODroid, CHIP, BeagleBoard are all suitable hardware.
Note that flash cards have a limited lifetime if you constantly read/write to them.
Kubernetes is a great option to learn clustering containers.
Docker Swarm is also good.
None of these solutions provides distributed storage, so if you're talking about a PHP-type web server and an SQL database, which are not distributed themselves, then you can't really be redundant even with Kubernetes or Swarm.
To be effectively redundant, you need a master/slave setup for the DB, or better, a clustered database like Elasticsearch, or maybe the clustered version of MariaDB for SQL, so you have redundancy provided by the database cluster itself (which is not a replacement for backups, but is better than a single container).
For real distributed storage, you need to look at technologies like Ceph or GlusterFS. These do not work well with Kubernetes or Swarm because they need to be tied to the hardware. There is a Docker/Kubernetes Ceph project on GitHub, but I'd say it is still a bit hacky.
It's better to provision this separately, or directly on the host.
As far as load balancing is concerned, you may want to have a couple of nodes with external load balancers for redundancy. If you build a Kubernetes cluster, you don't really choose what else may run on the same node, except by specifying CPU/RAM quotas and limits, or affinity.
If you want to give Raspberry Pi 3 a try with Kubernetes, here is a step-by-step tutorial to set up your Kubernetes cluster on Raspberry Pi 3:
To prevent the read/write issue, you might consider purchasing an additional NAS device and mounting it as a volume in your pods.
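For example, assuming the NAS exports an NFS share (the server address and export path below are placeholders), a pod could mount it like this - a minimal sketch:

    # Minimal sketch: mount an NFS share exported by the NAS into a pod
    kubectl apply -f - <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: blog
    spec:
      containers:
      - name: blog
        image: nginx
        volumeMounts:
        - name: nas-data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nas-data
        nfs:
          server: 192.168.1.50   # NAS address (placeholder)
          path: /export/blog     # exported share (placeholder)
    EOF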
Totally agree with MrE about the distributed storage for PHP-like setups. A volume's lifespan is per pod and is tied to the pod, so you cannot share one volume between pods.