I have a Graylog server running as a Docker container on my host. Now I want to run a Graylog cluster using Docker Swarm. Is it possible to do this? If so, should I add an extra load balancer like nginx?
It is possible; I am running Graylog in a Swarm environment as well. Have you checked https://hub.docker.com/r/graylog2/graylog/? It includes a Docker Compose template which you can adjust to your needs and deploy with docker stack deploy.
You do not need an extra load balancer by default, but you can add one, of course. I am using HAProxy to terminate SSL and proxy requests into the container.
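For reference, a minimal sketch of the deployment step itself, assuming you have downloaded and adjusted the Compose template from that page (the stack name graylog is arbitrary):
$ docker swarm init                                  # only needed if the node is not already a swarm manager
$ docker stack deploy -c docker-compose.yml graylog  # deploy the adjusted Compose template as a stack
$ docker stack services graylog                      # verify that all services come up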
Is there a way to load a single image into a Kubernetes Cluster without going through Container Registries?
For example, if I build an image locally on my laptop, can kubectl do something akin to docker save/load to make that image available in a remote Kubernetes cluster?
I don't think kubectl can make a node load an image that was not built on it, but you can achieve something similar with the Docker daemon CLI (have the remote worker node build the image from your local environment).
Something like:
$ docker -H tcp://<remote-node-ip>:2375 build <path-to-build-context>
or set the Docker host as an environment variable in your local (laptop) environment:
$ export DOCKER_HOST="tcp://<remote-node-ip>:2375"
$ docker ps
Keep in mind that your remote worker node needs access to all the dependencies required to build the image.
See the documentation.
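For this to work, the Docker daemon on the remote node has to listen on TCP in the first place. A minimal sketch of that setup (insecure, no TLS, so only for trusted networks):
$ dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375   # expose the Docker API on port 2375 in addition to the local socket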
Also, I am not sure why you want to work around a remote repository, but if the reason is that you don't want to expose your image publicly, I suggest setting up a private Docker registry in the long term.
Kubernetes requires container images to be in a registry - public or private, running in the cluster itself as a pod/container, or remote with respect to the cluster. Even when the registry is on one of the cluster nodes - something often used with local minikube - the registry is referenced by the node's IP/hostname.
In order for a remote Kubernetes cluster to pull an image from your local laptop, you'd have to run a registry locally (say, via docker run -d -p 5000:5000 --name registry registry:2), and your laptop would have to be reachable over the network.
Note that you would either have to secure the local registry with a trusted certificate and key (e.g., from Let's Encrypt or another reputable CA), or, if running an insecure registry, configure Docker on the Kubernetes cluster nodes to trust your insecure registry.
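A hedged sketch of that flow, with <laptop-ip> and myapp purely as placeholders:
$ docker tag myapp:latest <laptop-ip>:5000/myapp:latest      # tag the locally built image for your registry
$ docker push <laptop-ip>:5000/myapp:latest                  # push it to the registry running on your laptop
$ kubectl run myapp --image=<laptop-ip>:5000/myapp:latest    # the cluster nodes pull it from that registry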
I am new to Google Cloud Platform and have the following context:
I have one Compute Engine VM running as a MongoDB server and another Compute Engine VM running as a NodeJS server, already using Docker. The NodeJS application connects to Mongo via the default VPC internal IP. Now I'm trying to migrate the NodeJS application to Google Kubernetes Engine, but I can't connect to the MongoDB server when I deploy the NodeJS application's Docker image to the cluster.
All services, GCE and GKE alike, are in the same region (us-east1).
As a hard test, I accessed a Kubernetes cluster node via SSH, deployed a simple MongoDB Docker image, and tried to connect to the remote MongoDB server from the command line, but the problem is the same: a timeout when trying to connect.
I have also checked the firewall settings on GCP as well as the bindIp setting on the MongoDB server, and neither is blocking the connection.
Does anyone know what may be happening? Thank you very much.
In my case, traffic from GKE to the GCE VM was blocked by the Google firewall even though both are in the same network (default).
I had to whitelist the cluster's pod network, which is listed in the cluster details:
Pod address range 10.8.0.0/14
https://console.cloud.google.com/kubernetes/list
https://console.cloud.google.com/networking/firewalls/list
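A sketch of an equivalent gcloud command, assuming the default network, MongoDB's default port 27017, and the pod range above (the rule name is arbitrary):
$ gcloud compute firewall-rules create allow-gke-pods-to-mongo \
    --network=default \
    --direction=INGRESS \
    --source-ranges=10.8.0.0/14 \
    --allow=tcp:27017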
By default, containers in a GKE cluster should be able to access GCE VMs in the same VPC through internal IPs. It is just like accessing the internet (e.g., google.com) from GKE containers: GKE and the VPC know how to route the traffic. The problem must be with some other configuration (the firewall or your application).
You can run a test: start a simple HTTP server on the GCE VM (say its internal IP is 10.138.0.5):
python -m SimpleHTTPServer 8080
then create a GKE container and try to access the service:
kubectl run my-client -it --image=tutum/curl --generator=run-pod/v1 -- curl http://10.138.0.5:8080
I have Minikube running Kubernetes inside VirtualBox.
One of the Docker containers it runs is an Ignite server.
During development I try to access the Ignite server from an outside Java client, but discovery fails with every configuration I have tried.
Is it possible at all?
If yes, can someone give an example?
To enable Apache Ignite node auto-discovery in Kubernetes, you need to enable TcpDiscoveryKubernetesIpFinder in your IgniteConfiguration. Read more about this at https://apacheignite.readme.io/docs/kubernetes-deployment. Your Kubernetes service definition should specify the container's exposed port; then Minikube should give you the service URL after a successful deployment.
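For the Minikube side, a minimal sketch, assuming a deployment named ignite and Ignite's default discovery port 47500 (both are assumptions; adjust them to your setup, and whether discovery then succeeds still depends on your IpFinder configuration):
$ kubectl expose deployment ignite --type=NodePort --port=47500   # expose the discovery port outside the cluster
$ minikube service ignite --url                                   # prints the host:port to point your Java client at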
I forgot to set export KUBE_ENABLE_INSECURE_REGISTRY=true when running kube-up.sh (AWS provider). I was wondering if there is any way to retroactively apply that change to a running cluster. It is only a 3-node cluster, so doing it manually is an option. Or is the only way to tear down the cluster and start from scratch?
I haven't tested it, but in theory you just need to add --insecure-registry 10.0.0.0/8 (if you are running your insecure registry on the kube network 10.0.0.0/8) to the Docker daemon options (DOCKER_OPTS).
You can also specify the registry URL instead of the network.
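Untested as well, but on each node that would look something like the following (the file path is an assumption; /etc/default/docker is the common Debian/Ubuntu location, other distros differ):
$ echo 'DOCKER_OPTS="$DOCKER_OPTS --insecure-registry 10.0.0.0/8"' | sudo tee -a /etc/default/docker
$ sudo service docker restart    # restart the daemon so the new option takes effect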
I started learning about Docker, but I keep getting confused, even though I have read about it in multiple places.
The terms in question are Docker Host and Docker Container. My understanding so far:
Docker Engine is the base engine that handles the containers.
Docker containers sit on top of the Docker Engine. They are created from recipes (a text file with shell-script-like instructions). Docker pulls the image from the hub, and you can install your stuff on it.
In a typical application environment, you create separate containers for each piece of the system: application server, database server, web server, etc. (one container for each).
Docker Swarm is a cluster of containers.
Where does the Docker Host come in? Is this another word for Container or another layer where you can keep multiple containers together?
Sorry, this may be a basic question.
I googled it, but to no avail.
The Docker host is the base, traditional OS server where the OS and processes run in normal (non-container) mode. So the OS and the processes you start by actually powering on and booting a server (or VM) make up the Docker host. The processes started within containers via docker commands are your containers.
To make an analogy: the docker host is the playground, the docker containers are the kids playing around in there.
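As a small illustration of that analogy (a sketch assuming a Linux host running Docker natively, with nginx purely as an example image):
$ docker run -d --name web nginx   # a kid enters the playground
$ docker ps                        # the host's Docker Engine lists the running container
$ ps aux | grep nginx              # from the host, the container's nginx shows up as an ordinary process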
The Docker host is the machine on which Docker Engine is installed.
The Host is the machine managing the containers and images, where you actually installed Docker.
The Docker host is the machine where you installed the Docker Engine. A Docker container can be compared to a simple process running on that same Docker host.
The host is the underlying OS and its support for app isolation (i.e., process and user isolation via "containers"). Docker provides an API that defines a method of application packaging and methods for working with containers.
Host = container implementation
Docker = app packaging and container management