Precondition: the Kubernetes cluster has 1 master and 2 workers. The cluster uses one CIDR for all nodes.
Question: how do I configure the network so that a pod on worker1 can communicate with a pod on worker2?
Kubernetes has its own service discovery, and you can define a Service for communication. If you want to communicate with or send requests to a pod running on worker2, you have to define a Service for it. Suppose you have an add-service workload and you want to communicate with it; then you have to define a Service for it like below:
apiVersion: v1
kind: Service
metadata:
  name: add-service
spec:
  selector:
    app: add
  ports:
    - port: 3000
      targetPort: add-service
Then from worker1 you can use add-service to communicate, and Kubernetes will use service discovery to find the right pod, whichever worker it runs on. Here is a detailed hackernoon article about how to create pods, deployments and services and how to communicate between them.
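As a rough sketch (the pod name below is made up, and it assumes both pods run in the same namespace with cluster DNS enabled), you could test this from any pod on worker1:

# "some-pod" is a hypothetical pod name scheduled on worker1
kubectl exec -it some-pod -- curl http://add-service:3000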
A Kubernetes cluster consists of one or more nodes. A node is a host system, whether physical or virtual, with a container runtime and its dependencies (i.e. mostly Docker) and several Kubernetes system components, that is connected to a network allowing it to reach the other nodes in the cluster.
You can find more answers here
When the cluster uses one CIDR for all nodes, pods will be assigned IP addresses from that single subnet.
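If you want to check which pod CIDR each node was given, one way (assuming your CNI plugin uses the node's podCIDR field) is:

# list each node together with its assigned pod CIDR
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR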
Related
I am a bit new to Kubernetes and I am working with EKS.
I have two main apps, each with a number of pods, and I have set up an ELB for external access.
I also have a small app with, say, 1-2 pods. I don't want to set up an ELB just for this small app. I checked NodePort, but in that case I can't use the default HTTPS port 443.
So I feel the best thing to do in this case would be to move the small app outside the cluster and maybe set it up on an EC2 instance. Or is there some other way to expose the small app while keeping it inside the cluster itself?
You can try to use the host network of the node via hostPort (not recommended for production use in Kubernetes):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section. The host IP can change when the container is restarted, two containers using the same hostPort cannot be scheduled on the same node, and the usage of hostPort is considered a privileged operation on OpenShift.
Extra
I don't want to set up an ELB just for this small app.
Ideally, you should use Deployments with an Ingress and an ingress controller. That way there is a single ELB for the whole EKS cluster and all services use that single entry point.
All PODs or deployments can run in a single cluster if you want. The single Ingress entry point handles the traffic into the EKS cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can read this article about how to set up Ingress in EKS on AWS to get an idea.
You can use different domains for exposing services.
Example:
https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
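As a rough sketch of that idea (the hostnames and service names below are made up for illustration), a single Ingress can route different hosts to the main apps and to the small app, all behind one load balancer:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  rules:
    - host: big-app.example.com            # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: big-app-service      # hypothetical Service name
                port:
                  number: 80
    - host: small-app.example.com          # hypothetical domain
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: small-app-service    # hypothetical Service name
                port:
                  number: 80

With this, the small app does not need its own ELB; the ingress controller's single load balancer forwards traffic to it based on the host name.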
I have a very hard time understanding what the Kubernetes network architecture really looks like.
My basic understanding is that "there's a machine behind each IP", but that gets murky with all this stuff of containers inside pods inside nodes inside a cluster hosted somewhere.
Adding services, deployments and other Kubernetes objects makes it even more confusing. The documentation is not super clear on that. I'm just lost and throwing my hands in the air.
Could I ask for a brief explanation of what network is inside what network, and what elements have IPs and/or ports?
"there's a machine behind each IP"
I am not sure which IP you are talking about.
There are multiple components in Kubernetes; if we focus on the main ones:
POD (it runs the Docker container)
Deployment
Service
Ingress
Now, if we talk about managing the traffic, it works like:
Ingress > ingress controller > Service > Deployment > POD > Container
There are IPs assigned to each POD (workload).
But they are not useful in the normal case; they are auto-managed by K8s and there is nothing you need to do with them.
These are internal IPs, so you cannot connect to the workload of a POD from outside of Kubernetes.
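For example, you can see those internal pod IPs with the following command (the output below is only illustrative):

kubectl get pods -o wide
# NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
# app-1   1/1     Running   0          5m    10.244.1.5   worker-1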
Now we have these types of Services:
ClusterIP
Load Balancer
Node Port
ClusterIP is, again, an internal IP managed by Kubernetes.
The LoadBalancer type is exposed to the internet; it is like attaching an LB to your workload or application so that it is reachable from the internet.
In this case, you will get an external IP open to the internet.
That was the internal architecture.
If we talk about a simple cluster architecture:
There are master nodes and worker nodes.
Worker nodes have internal and external IPs depending on whether you run a private or a public Kubernetes cluster.
Each of your containers or PODs runs on a worker node and, in the ideal scenario, has an internal IP.
Multiple workloads or containers can run on a single machine or a single VM node.
Ports get used the same way we use them generally.
For example, this is my test service:
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
    - name: http
      port: 80
      targetPort: 9595
    - name: https
      port: 9595
      targetPort: 9595
  selector:
    app: test
    tier: frontend
It exposes two ports, 80 and 9595. If you look carefully at targetPort: 9595, in both cases it diverts traffic to port 9595, on which my container or workload is running.
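As a quick check from inside the cluster (the throwaway pod and curl image here are just an assumed way to test), both service ports should land on the container's port 9595:

# both requests end up on targetPort 9595 of the pods selected by app=test, tier=frontend
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl http://test:80
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl http://test:9595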
I am new to Kubernetes. I have created a database cluster on Kubernetes with 2 nodes. I can access those Kubernetes pods from a thin client like DBeaver to check the data, but I cannot access those Kubernetes nodes externally. I am currently trying to run a thick client which will load the data into the cluster on Kubernetes.
kubectl describe svc <svc>
I can see a cluster IP assigned to the service. The type of my service is LoadBalancer. I tried to use that but it still does not connect. I read about using NodePort, but without any IP address, how do I access that?
So what is the best way to connect to any node or cluster from outside?
Thank you in advance
Regards
@KrishnaChaurasia is right, but I would like to explain it in more detail with the help of the official docs.
I strongly recommend going through the following sources:
NodePort Type Service: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>. Here is an example of the NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
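With a Service like that, and assuming one of your nodes has an IP that is reachable from your machine, you could test from outside the cluster with something like:

# <node-ip> is the IP address of any node in the cluster
curl http://<node-ip>:30007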
Accessing services running on the cluster: You have several options for connecting to nodes, pods and services from outside the cluster:
Access services through public IPs.
Use a service with type NodePort or LoadBalancer to make the service reachable outside the cluster. See the services and kubectl expose documentation.
Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication?
Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, place a unique label on the pod and create a new service which selects this label.
In most cases, it should not be necessary for an application developer to directly access nodes via their node IPs.
A supplement example: Use a Service to Access an Application in a Cluster: This page shows how to create a Kubernetes Service object that external clients can use to access an application running in a cluster.
These will help you to better understand the concepts of different Service Types, how to expose and access them from outside the cluster.
Currently I have 3 virtual machines (1 Kubernetes master node and 2 slaves).
I want to create a service which encapsulates 3 replicas of my container.
I am curious whether, in this local environment, Kubernetes offers a load balancer by default when creating the service, even though it was NOT specified in the service YAML file. Does it do round robin by default?
If you're not on a supported cloud provider, you're pretty much stuck with NodePort or ClusterIP for service types. A project I used when I was experimenting with a local Kubernetes environment was MetalLB. MetalLB allows you to use the LoadBalancer service type and expose your service outside of the cluster network when running Kubernetes outside a hosted platform, i.e., a local test cluster.
To use MetalLB, you must provide a pool of IP addresses that it can hand out on your network.
First, create a ConfigMap with the IP address range --
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
Then add that config map to your cluster.
kubectl apply -f metallb-config.yaml
Finally, add the MetalLB controller to your cluster:
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.3/manifests/metallb.yaml
Now you should be able to expose your service.
kubectl expose deployment name-of-deployment --type=LoadBalancer --name=name-of-service
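You can then check that the service was given an address from the MetalLB pool (the output below is only illustrative):

kubectl get svc name-of-service
# NAME              TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)
# name-of-service   LoadBalancer   10.96.12.34    192.168.1.240   80:31234/TCP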
You cannot practically use a LoadBalancer service locally. A LoadBalancer service creates a load balancer provided by your cloud provider if you are running on a public cloud. You can set L7 load balancing capabilities via your cloud provider's offering. Load balancing at the L4 layer is handled by kube-proxy, which is round robin by default.
If you are using ClusterIP or NodePort, you also get the L4 load balancing offered by kube-proxy.
You won't be able to do that locally because there is no cloud controller available.
When you are in the cloud and you create a service of type LoadBalancer, the Kubernetes controller talks to the cloud controller and it creates a load balancer for the cluster. But in this case there is no cloud controller available to create the load balancer.
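You can see this locally: without a cloud controller, a LoadBalancer service simply never gets an external IP (the service name and output below are only illustrative):

kubectl get svc my-lb-service
# NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)
# my-lb-service   LoadBalancer   10.96.45.67   <pending>     80:32010/TCP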
My environment is such that the Ignite client is on Kubernetes and the Ignite server is running on a regular server.
In such an environment, TCP connections are not allowed from the server to the client.
For this reason, CommunicationSpi (server -> client) connections cannot be established.
What I'm curious about is what issues can occur in situations where CommunicationSpi is not available.
In this environment, is there a way to make a CommunicationSpi (server -> client) connection?
In Kubernetes, the service is used to communicate with pods.
The default service type in Kubernetes is ClusterIP
ClusterIP is an internal IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.
To expose the pods outside the Kubernetes cluster, you will need a k8s Service of NodePort or LoadBalancer type.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort> .
Please note that you need an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
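For reference, a minimal LoadBalancer Service could look like the sketch below; the name, selector and port are placeholders (47100 is Ignite's default communication port, adjust it if you changed it):

apiVersion: v1
kind: Service
metadata:
  name: ignite-client-lb          # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: ignite-client            # hypothetical label on your client pods
  ports:
    - port: 47100                 # Ignite TcpCommunicationSpi default port (assumption)
      targetPort: 47100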
Alternatively, it is possible to use Ingress.
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
Upon your comment, I recall that it's possible to use the hostNetwork and hostPort methods.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted Kubernetes can reschedule the pod onto a different node, so the application will change its IP address. Besides that, two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.
What is the host networking good for? For cases where a direct access to the host networking is required.
hostPort
The hostPort setting applies to the Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where hostIP is the IP address of the Kubernetes node where the container is running and hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 8086
          hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section. The host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is the hostPort used for? For example, the nginx based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow the inbound traffic on these ports from the outside of the Kubernetes cluster.
To support such a deployment configuration you would need to do a lot of work around the network configuration - setting up K8s Services, an Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and is working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8s environment and the servers are on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers in the configuration, as described here.