Currently I am facing the following problem:
I set up three VirtualBox machines with Debian and installed Docker. No firewall is in place.
I created a swarm, making one machine the manager and joining the other two as workers, as described on countless web pages. Works perfectly.
On the swarm manager I activated remote API access via -H :4243... and restarted the daemon (only on the swarm manager).
'docker node ls' shows all nodes as active.
When I call http://<manager-ip>:4243/nodes I see all nodes.
I created an overlay network (most likely not needed to illustrate my problem; the standard ingress networking should be fine too).
Then I created a service with 3 replicas, specifying a name, my overlay network, and some env params.
'docker service ps <service-name>' tells me that each node runs one container with my image.
Double-checking with 'docker ps' on each node confirms this.
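For reference, the setup described above roughly corresponds to the following commands (the network, service, and image names here are placeholders, not the ones actually used):

# on the manager: create an overlay network and a service with 3 replicas
docker network create --driver overlay my-overlay
docker service create \
  --name my-service \
  --replicas 3 \
  --network my-overlay \
  --env SOME_VAR=some_value \
  my-image:latest

# verify that one task landed on each node
docker service ps my-service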
My problem is:
Calling 'http://<manager-ip>:4243/containers/json' I see only one container, the one on the swarm manager.
I expect to see 3 containers, one for each node. The question is: why?
Any ideas?
This question does not seem to be my problem.
Listing containers via /containers/json only shows "local" containers on that node. If you want a complete overview of every container on every node, you'll need to use the swarm-aware endpoints. Docker services are the high-level abstraction, while tasks are the container-level abstraction. See https://docs.docker.com/engine/api/v1.30/#tag/Task for reference.
If you perform a request on your manager node at http://<manager-ip>:4243/tasks, you should see every task (i.e., container), which node it is running on, and which service it belongs to.
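For example, assuming the remote API is bound to port 4243 on the manager as described above, something like this should list every task together with the node it runs on and the service it belongs to (jq is only used here for readability):

# list all tasks in the swarm, regardless of which node they run on
curl -s http://<manager-ip>:4243/tasks \
  | jq '.[] | {task: .ID, service: .ServiceID, node: .NodeID, state: .Status.State}'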
Related
I have a k8s cluster that runs the main workload and has a lot of nodes.
I also have a node (I call it the special node) that runs some special containers and is NOT part of the cluster. This node has access to some resources that those special containers require.
I want to be able to manage the containers on the special node along with the cluster, and make it possible to access them from inside the cluster. So the idea is to add the node to the cluster as a worker node, taint it to prevent normal workloads from being scheduled on it, and add tolerations to the pods running the special containers.
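A minimal sketch of that idea, assuming a hypothetical node called special-node and a made-up taint key called dedicated:

# taint the special node so that normal workloads are not scheduled on it
kubectl taint nodes special-node dedicated=special:NoSchedule

# pods for the special containers tolerate that taint
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: special-workload
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "special"
    effect: "NoSchedule"
  containers:
  - name: app
    image: special-image:latest   # placeholder image
EOF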
The idea looks fine, but there may be a problem. There will be other containers and non-container daemons and services running on the special node that are not managed by the cluster (they belong to other activities that have to stay separate from the cluster). I'm not sure whether that will be a problem, but I have not seen non-cluster containers running alongside pod containers on a worker node before, and I could not find a similar question on the web about that.
So please enlighten me: is it OK to have non-cluster containers and other daemon services on a worker node? Does it require some caution, or am I just worrying too much?
Ahmad, from the above description I understand that you are trying to deploy a Kubernetes cluster using kubeadm, minikube, or a similar solution. You have some servers, and one of them has some special functionality, such as a GPU. For deploying your special pods you can use a node selector, and I hope you are already doing this.
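For completeness, a node selector could look roughly like this (the label name and image are made up for illustration):

# label the node that has the special functionality (e.g. a GPU)
kubectl label nodes special-node hardware=gpu

# pin the pod to that node via a node selector
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: special-pod
spec:
  nodeSelector:
    hardware: gpu
  containers:
  - name: app
    image: my-gpu-image:latest
EOF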
Coming to running a separate container runtime on one of these nodes, you mainly need to consider two points:
1. This can be done, and if you don't integrate the extra container runtime with Kubernetes, it is simply one more piece of software running on your server. Let's say you used kubeadm on all the nodes and you also want to run plain Docker containers: they will stay separate, provided you have drafted a proper architecture and configured a separate, isolated virtual network accordingly.
2. Now comes the storage part: create separate storage volumes for Kubernetes and for the other container runtime, so that if either one fails or gets corrupted it does not affect the other, and also to provide isolation.
If you maintain proper isolation, from storage to network, you can run Kubernetes and a separate container runtime side by side; however, this is not a recommended setup for production environments.
I see that Kubernetes uses pods, and each pod can contain multiple containers.
For example, I create a pod with:
Container 1: Django server - running on port 8000
Container 2: ReactJS server - running on port 3000
Since the containers inside a pod can't have conflicting ports anyway, isn't it better to put all of them into one container? Because the advantage I see in using containers is not having to worry about port conflicts.
Container 1: BOTH the Django server running on port 8000 AND the ReactJS server running on port 3000
No need for Container 2.
and also
When I run different Docker containers on my PC, I can't access them via localhost.
But then how is this possible inside a pod with multiple containers?
What's the difference between Docker containers run on a PC and those run inside a pod?
The typical way to think about this delineation is "which parts of my app scale together?"
So for your example, you probably wouldn't even choose a common pod for them. You should have a Django pod and, separately, a ReactJS server pod. That way you can scale them independently.
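A rough sketch of that split, with made-up names and images, so each part can be scaled on its own:

# separate deployments for the backend and the frontend
kubectl create deployment django-api --image=my-django-image:latest
kubectl create deployment react-frontend --image=my-react-image:latest

# each one gets its own service and its own replica count
kubectl expose deployment django-api --port=8000
kubectl expose deployment react-frontend --port=3000
kubectl scale deployment django-api --replicas=4
kubectl scale deployment react-frontend --replicas=2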
The typical case for deploying pods with multiple containers is a pattern called "sidecar", where the added container enhances some aspect of the deployed workload and always scales right along with that workload container. Examples are (a sketch of the first one follows the list):
Shipping logs to a central log server
Security auditing
Purpose-built proxies - e.g. one that handles DB connection details
Service Mesh (intercepts all network traffic and handles routing, circuit breaking, load balancing, etc.)
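As an illustration of the first case, a pod where a hypothetical log-shipping sidecar reads the application's log file through a shared volume could look like this (in a real setup the sidecar would forward the lines to a central log server; here it just tails them):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: app
    image: my-app:latest               # placeholder application image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper                  # sidecar: scales together with the app
    image: busybox
    command: ["sh", "-c", "touch /var/log/app/app.log && tail -f /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
EOF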
As for deploying the software into the same container, this would only be appropriate if the two pieces being considered for co-deployment into the same container are developed by the same team and address the same concerns (that is - they really are only one piece when you think about it). If you can imagine them being owned/maintained by distinct teams, let those teams ship a clean container image with a contract to use networking ports for interaction.
(Some of) the details are these:
Containers in a pod share a networking and IPC namespace. Thus one container in a pod can modify iptables, and the modification applies to all other containers in that pod. This may help guide your choice: which containers should have that intimate a relationship with each other?
Specifically I am referring to Linux Namespaces, a feature of the kernel that allows different processes to share a resource but not "see" each other. Containers are normal Linux processes, but with a few other Linux features in place to stop them from seeing each other. This video is a great intro to these concepts. (timestamp in link lands on a succinct slide/moment)
Edit - I noticed the question was edited to focus more succinctly on networking. The answer lies in the namespace feature of the Linux kernel that I mentioned. Every process belongs to a network namespace. Without doing anything special, it would be the default network namespace. Containers usually launch into their own network namespace, depending on the tool you use to launch them. Linux then includes a feature that lets you virtually connect two namespaces - this is called a veth pair (a pair of virtual Ethernet devices, connected). After a veth pair is set up between the default namespace and the container's namespace, both get a new eth device and can talk to each other. Not all tools will set up that veth pair by default (for example, Kubernetes will not do this by default). You can, however, tell Kubernetes to launch your pod in "host" networking mode, which just uses the system's default network namespace, so the veth pair is not even required.
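The plumbing that container runtimes do for you can be reproduced by hand with iproute2, roughly like this (run as root; the namespace name and addresses are arbitrary):

# create a new network namespace and a veth pair
ip netns add demo
ip link add veth-host type veth peer name veth-demo

# move one end into the namespace and give both ends an address
ip link set veth-demo netns demo
ip addr add 10.200.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo ip addr add 10.200.0.2/24 dev veth-demo
ip netns exec demo ip link set veth-demo up
ip netns exec demo ip link set lo up

# the default namespace and the "container" namespace can now reach each other
ping -c 1 10.200.0.2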
Background: I have approximately 50 nodes "behind" a namespace, meaning that a given pod in this namespace can land on any of those 50 nodes.
The task is to test whether an outbound firewall rule (in a firewall outside the cluster) has been implemented correctly. Therefore I would like to run a command on each potential node in the namespace that tells me whether I can reach my target from that node. (I am using curl for this test, but that is beside the point for my question.)
I can create a small containerized app which exits 0 on success. The next step would be to execute it on each potential node and harvest the results. How do I do that?
(I don't have direct access to the nodes, only indirect access via Kubernetes/OpenShift. I only have namespace-level access, not cluster-level.)
The underlying node firewall settings are NOT controlled by K8s network policies. To test network connectivity within a namespace you only need to run one pod in that namespace. To test the firewall settings of a node you would typically SSH into the node and run a test command; while this is possible with K8s, it would require the pod to run with root privileges, which is not applicable to you since you only have access to a single namespace.
The next step would be to execute it on each potential node and harvest the results. How do I do that?
As gohm'c answered, you cannot run commands on nodes unless you have access to the worker nodes. You need SSH access to check the firewall on the nodes.
If you are just planning to run your container app on specific types of nodes, or on all the nodes, you can follow the answer below.
You can create a Deployment, or you can use a DaemonSet if you want to run on each node.
A Deployment could be useful if you are planning to run on specific nodes; in that case you have to use a node selector or affinity.
A DaemonSet will deploy and run containers on all existing nodes, so you can choose accordingly.
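A rough sketch of such a DaemonSet (the image and target URL are placeholders); it runs the curl test once per node and leaves the result in each pod's log:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fw-check
spec:
  selector:
    matchLabels:
      app: fw-check
  template:
    metadata:
      labels:
        app: fw-check
    spec:
      containers:
      - name: check
        image: curlimages/curl
        # try the external target once, record the outcome, then stay alive for inspection
        command: ["sh", "-c", "curl -sS --max-time 10 https://my-target.example.com && echo FW-OK || echo FW-FAIL; sleep 3600"]
EOF

# one pod per node; see which nodes succeeded
kubectl get pods -l app=fw-check -o wide
kubectl logs -l app=fw-check --prefix --max-log-requests=60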
I am working on a POC, and I found some strange behavior after setting up my Kubernetes cluster.
In fact, I am working with a topology of one master and two minions.
When I tried to bring up two pods, one on each minion, and expose a service for them, it turned out that when I request the service from the master, nothing is returned (no response from either pod), and when I request the service from a minion, only the pod deployed on that minion responds, but the other one does not.
This can heavily depend on how your cluster is provisioned.
For starters, you need to validate how networking is set up and whether it works as Kubernetes expects. In short, if you launch two pods (on separate nodes), they should get IPs from their dedicated per-node ranges and be able to route between nodes. You can use some small(ish) base image (alpine/debian/ubuntu, etc.) with something like sleep 1d, exec into them interactively with a shell, and simply ping one from the other. If that does not work, your network setup is broken.
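For example, something along these lines (image and pod names are arbitrary; replace the IP placeholder with the address shown by the get command):

# two long-running pods; ideally they end up on different nodes
kubectl run net-test-a --image=busybox --restart=Never -- sleep 1d
kubectl run net-test-b --image=busybox --restart=Never -- sleep 1d

# find pod B's IP, then ping it from pod A
kubectl get pod net-test-b -o wide
kubectl exec -it net-test-a -- ping -c 3 <ip-of-net-test-b>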
Make sure you test between pods, not directly from the node's host OS. In some configurations the node is unable to access service IPs due to routing concerns, while pod-to-pod works fine (I have seen this with some flannel configurations).
Also, your networking is probably provided by some overlay network solution like flannel, weave, or calico, so check their respective logs for signs of problems.
I set up a Kubernetes cluster with 2 powerful physical servers (32 cores + 64 GB memory). Everything runs very smoothly except for the bad network performance I observed.
As a comparison: I run my service directly on one of the physical machines (a single instance) and have a client machine in the same network segment calling the service. The RPS easily reaches 10k. But when I put the exact same service in Kubernetes 1.1.7, with one pod (instance) of the service launched and the service exposed via externalIP in the service YAML file, the RPS drops to 4k with the same client. Even after I switched kube-proxy to iptables mode, it doesn't seem to help much.
When I searched around, I found this document: https://www.percona.com/blog/2016/02/05/measuring-docker-cpu-network-overhead/
It seems Docker port forwarding is the network bottleneck, while other Docker network modes, like --net=host, bridge networking, or containers sharing a network namespace, don't show such a performance drop. Is the Kubernetes team already aware of this network performance drop, given that the Docker containers are launched and managed by Kubernetes? Is there any way to tune Kubernetes to use another Docker network mode?
You can configure Kubernetes networking in a number of different ways when configuring the cluster, and in a few different ways on a per-pod basis. If you want to verify whether the Docker networking arrangement is the problem, set hostNetwork to true in your pod specification and give it another try (example here). This is the equivalent of the Docker --net=host setting.
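A minimal pod spec with host networking, to compare against the default setup (the image name is a placeholder):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-service-hostnet
spec:
  hostNetwork: true            # equivalent of docker run --net=host
  containers:
  - name: app
    image: my-service-image:latest
    ports:
    - containerPort: 8080
EOF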