I want to use the following deployment architecture.
One machine running my web server (nginx)
Two or more machines running uwsgi
PostgreSQL as my DB on another server.
All three are separate host machines on AWS. During development I used Docker and was able to run all three on my local machine, but now I am clueless about how to split them across three separate hosts and run them. Any guidance, clues, or references will be greatly appreciated. I would prefer to do this using Docker.
If you're really adamant about keeping the services separate on individual hosts, then there's nothing stopping you from still running your containers on Docker-installed EC2 hosts for nginx/uwsgi. You could even use a CoreOS AMI, which comes with a nice, secure Docker installation pre-loaded (https://coreos.com/os/docs/latest/booting-on-ec2.html).
For the database, use PostgreSQL on AWS RDS.
If you're running containers you can also look at AWS ECS, which is Amazon's container service. That would be my initial recommendation, but I saw that you wanted all these services on individual hosts.
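To make that concrete, a rough sketch of what each host could run is below; the image name, application port, DATABASE_URL variable and RDS endpoint are placeholders rather than anything from your setup:

# on the nginx EC2 host
docker run -d --name nginx -p 80:80 \
  -v /etc/nginx/conf.d:/etc/nginx/conf.d:ro nginx:stable
# on each uwsgi EC2 host, pointing the app at the RDS endpoint
docker run -d --name app -p 8000:8000 \
  -e DATABASE_URL=postgres://user:pass@mydb.xxxxxx.us-east-1.rds.amazonaws.com:5432/mydb \
  myorg/uwsgi-app:latest

The nginx configuration then lists the uwsgi hosts' private IPs as upstreams.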
You can use docker stack to deploy the application in a swarm.
Join the other 2 hosts as workers and use the option below:
https://docs.docker.com/compose/compose-file/#placement
deploy:
  placement:
    constraints:
      - node.role == manager
The constraint value can be node.role == manager or node.role == worker, or you can use node.hostname (or a node label) to pin a service to a specific host such as worker1 or workerN; this will restrict the services to run on individual hosts. For example, a stack file could look like the sketch below.
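A minimal stack file sketch, assuming nginx is pinned to the manager node and the uwsgi service to the workers (the image names are placeholders):

version: "3.7"
services:
  nginx:
    image: nginx:stable
    ports:
      - "80:80"
    deploy:
      placement:
        constraints:
          - node.role == manager
  uwsgi:
    image: myorg/uwsgi-app:latest
    deploy:
      replicas: 2
      placement:
        constraints:
          - node.role == worker

It is deployed from the manager node with docker stack deploy -c docker-compose.yml myapp.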
You can also make this more secure by using a VPN between the hosts if you wish.
I'd like to run two instances of k3s on the same machine.
Let's say I already have two different master servers.
What I'd like to do is to install:
A K3S instance to deal with Master #1.
A second instance to deal with Master #2.
The first instance will be managed by myself, while the second instance will be managed by my customer.
Is there a way to set up two instances of K3S on the same machine?
Running multiple instances of k3s is not supported out of the box.
You can, however, use tools like k3d (which runs k3s inside Docker containers) or multipass-k3s (which runs it inside a VM).
A chroot jail is not really an option: rootless K3s is still in an experimental stage [reference], and applications running as root can easily break out of a chroot jail.
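As a rough idea of the k3d route (a sketch, assuming k3d v3 or later and Docker are installed; the cluster names and API ports are just examples), each cluster gets its own API port and kubeconfig context:

# one cluster per "master"
k3d cluster create mine --api-port 6443
k3d cluster create customer --api-port 6444
# switch between them via their kubeconfig contexts
kubectl config use-context k3d-mine
kubectl config use-context k3d-customer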
I have a Kubernetes cluster set up on DigitalOcean and a separate Postgres database instance there. In the database cluster settings there is a list of limited IP addresses that have access to that database cluster (which looks like a great idea).
I have a build and deploy process set up with CircleCI, and at the end of that process, after deploying a container to the K8s cluster, I need to run a database migration. The problem is that I don't know the CircleCI agent's IP address and cannot allow it in the DO settings. Does anybody know how we can access the DigitalOcean Postgres cluster from within CircleCI steps?
Unfortunately, when you use distributed services that you don't manage, I would be very cautious about the restricted-IP approach. (Really you have three services you don't manage - Postgres, Kubernetes, and CircleCI.) I feel DigitalOcean has provided a really excellent security option for internal networking, since it can track changes in droplet IPs, etc.
But when you are deploying from another service, especially if this is for production, and even if part of the solution you're deploying is (partially) on DigitalOcean infrastructure, I'd be very concerned that CircleCI will change its IP dynamically. DO has no way of knowing when this happens; unlike Postgres and Kubernetes, they don't manage it, even if they do host part of it.
Essentially I have to advise you to either get an assurance of a static IP from your CircleCI vendor/provider, or disable the IP limitation on Postgres.
What I want to do is deploy a multi-container application...
In RHEL OS
As a Red Hat supportable product (if possible)
In a single-node K8s cluster (bare-metal machine)
So I found several ways, but I have concerns about each.
minikube, minishift, OKD, CodeReady Containers
First, they run in a VM, but what I want is to run on the host.
Second, their docs say they are not for production environments.
So, is there any PaaS for a single-node cluster that is suitable as a production environment?
Docker, Docker-compose
The deployment target OS should probably be RHEL 8. I guess it is not a good idea to use Docker, because Red Hat products are moving away from Docker; even in the RHEL 8 repository there is no docker rpm for el8 yet.
My question is:
Is there any PaaS for a single-node cluster that is suitable as a production environment?
If not, is docker-compose the best option?
As was already mentioned, you should not use a single-node setup in a production environment.
You should not do that because, if your server drops, your service goes offline. There is nothing to switch to, nothing that might continue the process that was being worked on.
If you still want to set up a single-node Kubernetes cluster, you can do that using kubeadm; I think this would be as close to production grade as you can get (see the sketch below).
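A minimal single-node sketch, assuming kubeadm, kubelet and kubectl are already installed and a pod network add-on (e.g. flannel) is applied afterwards:

# initialise the control plane
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# allow ordinary workloads on the control-plane node (single-node only);
# on newer versions the taint key is node-role.kubernetes.io/control-plane
kubectl taint nodes --all node-role.kubernetes.io/master-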
Other than that, as an alternative, you can play with Installing Kubernetes with Minikube or Install a local Kubernetes with MicroK8s.
It's up to you which one you choose, but you need to remember this should not run as production; this should be a lab or test environment which, if it works as expected, will be migrated to a production-grade cluster with a few nodes.
As for a PaaS on a single node, there is Dokku:
Docker powered mini-Heroku. The smallest PaaS implementation you've ever seen.
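As a rough idea of the Dokku workflow (a sketch; the app name and server address are placeholders, and it assumes Dokku is already installed on the host):

# on the server
dokku apps:create myapp
# on the developer machine: deploy by pushing the repository
git remote add dokku dokku@server.example.com:myapp
git push dokku main   # or master, depending on the configured deploy branch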
And if you would consider using a cloud for PaaS, you can choose from AWS Cloud9, Azure App Service or Google App Engine.
A single-node cluster is not recommended for production applications. You need scalability, high availability and fault tolerance for production apps, and you must have more than one node to get those features.
Our company uses Kubernetes in all our environments, as well as on our local MacBooks using minikube.
We have many microservices, and most of them run on the JVM, which requires a large amount of memory. We have started to face an issue where we cannot run our stack on minikube because the local machine runs out of memory.
We thought about multiple solutions:
The first was to create a k8s cloud development environment: when a developer is working on a single microservice on their local MacBook, they would redirect outbound traffic into the cloud instead of the local minikube. But this solution creates new problems:
How would a pod inside the cloud dev environment send data to the local developer machine? It's not just a single request/response scenario.
We have many developers, and they could overlap each other with different versions of each service they need to deploy to the cloud. (We could give each developer a separate namespace, but we would need a huge cluster to support it.)
The second solution was to use a tool like skaffold or draft to deploy our current code into the cloud development environment. That would solve issue #1, but again we see problems:
Slow development cycle - building a Java image, pushing it to the remote cloud and waiting for init would take too much time for a developer to work effectively.
And we would still face issue #2.
Another thought was: Kubernetes supports multiple nodes, so why not just add another node, a remote node sitting in the cloud, to our local minikube? The main issue is that minikube is a single-node solution. Also, we didn't find any resources for this on the web.
A last thought was to connect the minikube Docker daemon to a remote machine, so we would use minikube on the local machine but Docker would run the containers on a remote cloud server. But no luck so far; minikube crashes when we attempt this, and we didn't find any resources for it on the web either.
Any thoughts on how to solve our issue? Thank you!
I'm looking for some clues on which solution can help me with my problem, which is running a Kubernetes cluster between 2 VMs.
I'm beginning with Kubernetes and all its possibilities, but like everybody I started from a minikube single-node cluster to host my 4 containers, respectively hosting MongoDB, Redis, RabbitMQ and MinIO.
The idea is that I need something like minikube to create a cluster spanning these 2 VMs.
Moreover, these 2 VMs will run RedHat EL 7, won't be local, and may be hosted on different machines.
Is it possible to build that architecture with kubeadm?
The response is yes, it can be achieved with kubeadm. It will allow you to aggregate your different VMs/hosts into the cluster, in the same way docker-swarm does, by kind of subscribing them into the cluster - at its core, something like the sketch below.
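A bare-bones sketch of that subscription step, assuming kubeadm is installed on both VMs (the IP, token and hash are placeholders that kubeadm init prints for you):

# on VM 1 (control plane)
kubeadm init --pod-network-cidr=10.244.0.0/16
# on VM 2 (worker), using the join command printed by kubeadm init
kubeadm join <vm1-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>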
See how (thanks #Jason Stanley)
Some Prerequisites
If, like me, you're on a client infrastructure, implementing your solution 'on-premise' in a Red Hat environment, you'll need two or three things.
The proper docker environment
An image registry
An easy way to configure your cluster
Docker environment
The first one, because of a restriction set by Docker and Red Hat, is to use docker-ee, the enterprise edition, which allows production support by Red Hat; this is why most people pay for a Red Hat subscription.
See typical architecture using docker-ee and how to install that on Red Hat
Local image registry
The other thing is to deploy a Docker registry. This is an official image (documentation here) provided by the Docker team; the registry will allow your cluster to pull images from it - images that you push into it while setting everything up. Note that this is relevant in a restricted environment where your VMs/hosts can't access the internet or have to go through a proxy.
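A quick sketch of running that registry and pushing an image into it (the run command comes from the official registry image documentation; the image name and registry host are placeholders):

# start the registry container
docker run -d -p 5000:5000 --restart=always --name registry registry:2
# tag and push an image so the cluster nodes can pull it from <registry-host>:5000
docker tag myapp:latest <registry-host>:5000/myapp:latest
docker push <registry-host>:5000/myapp:latest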
Cluster setting tool
One handy tool is helm, which allows you to set up your cluster easily by referencing which images provide a given service, on which port and with which policy.
Helm documentation
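For instance, the MongoDB piece of the stack could be installed from a chart roughly like this (a sketch assuming Helm 3 and the Bitnami chart repository; the release name is arbitrary):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install my-mongo bitnami/mongodb
# the same pattern applies to the redis, rabbitmq and minio charts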