What is the network structure like in a cluster? - kubernetes

I have a very hard time understanding what the Kubernetes network architecture really looks like.
My basic mental model is "there's a machine behind each IP", but that breaks down with containers inside pods inside nodes inside a cluster hosted somewhere.
Adding Services, Deployments and other Kubernetes objects makes it even more confusing, and the documentation is not super clear on this. I'm just lost and throwing my hands in the air.
Could I ask for a brief explanation of which network is inside which network, and which elements have IPs and/or ports?

"there's a machine behind each IP"
I am not sure which IP you are talking about.
There are multiple components in Kubernetes; the main ones are:
Pod (it runs the Docker containers)
Deployment
Service
Ingress
Now, if we talk about how the traffic is managed, it flows like this:
Ingress > ingress controller > Service > Deployment > Pod > Container
Each Pod (workload) gets its own IP address.
But you normally don't work with these IPs directly; they are auto-managed by Kubernetes, so there is nothing you have to do with them.
They are internal IPs, so you cannot connect to a Pod's workload from outside Kubernetes using them.
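If you want to see these Pod IPs for yourself (purely for inspection, they are not something you would hard-code), you can list them with:
kubectl get pods -o wide   # the IP column shows each Pod's internal address, the NODE column the worker it runs on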
Now, there are different types of Services:
ClusterIP
LoadBalancer
NodePort
ClusterIP is, again, an internal IP managed by Kubernetes.
A LoadBalancer Service attaches a cloud load balancer to your workload or application, so it is exposed to the internet.
In this case you get an external IP that is reachable from the internet.
That covers the internal architecture.
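As a minimal sketch of a LoadBalancer Service (the name, ports and selector here are just placeholders, not taken from a real app):
apiVersion: v1
kind: Service
metadata:
  name: test-lb
spec:
  type: LoadBalancer       # the cloud provider provisions an external load balancer and public IP
  ports:
  - port: 80               # port exposed on the load balancer
    targetPort: 9595       # port the container actually listens on
  selector:
    app: test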
If we talk about a simple cluster architecture:
There are a master node and worker nodes.
Worker nodes have internal and/or external IPs depending on whether your Kubernetes cluster is private or public.
Each of your containers or Pods runs on a worker node and, in the usual scenario, has an internal IP.
Multiple workloads or containers can run on a single machine or a single VM node.
Ports are used the same way they are used generally.
For example, this is my test service:
apiVersion: v1
kind: Service
metadata:
  name: test
  labels:
    app: test
spec:
  ports:
  - name: http
    port: 80
    targetPort: 9595
  - name: https
    port: 9595
    targetPort: 9595
  selector:
    app: test
    tier: frontend
It exposes two ports, 80 and 9595. If you look carefully at targetPort: 9595, in both cases the Service is directing traffic to port 9595, which is the port my container or workload is actually listening on.

Related

Kubernetes (EKS) Design confusions

I am a bit new to Kubernetes and I am working with EKS.
I have two main apps, each with a number of pods, and I have set up an ELB for external access.
I also have a small app with, say, 1-2 pods. I don't want to set up an ELB just for this small app. I looked at NodePort, but in that case I can't use the default HTTPS port 443.
So I feel the best thing to do would be to move the small app outside the cluster and maybe set it up on an EC2 instance. Or is there some other way to expose the small app while keeping it inside the cluster itself?
You can try to use the host network of the node via hostPort (not recommended for production use in Kubernetes):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 443
The hostPort feature allows you to expose a single container port on the
host IP. Using the hostPort to expose an application to the outside of
the Kubernetes cluster has the same drawbacks as the hostNetwork
approach discussed in the previous section. The host IP can change
when the container is restarted, two containers using the same
hostPort cannot be scheduled on the same node, and the usage of the
hostPort is considered a privileged operation on OpenShift.
Extra
I don't want to set up an ELB just for this small app.
Ideally, you should use your Deployments with an Ingress and an ingress controller. That way there is a single ELB for the whole EKS cluster, and all services go through that single entry point.
All Pods or Deployments can run in a single cluster if you want, and a single Ingress entry point handles the traffic into the EKS cluster.
https://kubernetes.io/docs/concepts/services-networking/ingress/
You can read this article on how to set up an Ingress in EKS on AWS to get an idea.
You can use different domains for exposing the services; a minimal Ingress sketch follows the example link below.
Example :
https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/
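For illustration only, here is a rough Ingress sketch (it assumes an ingress controller is already installed in the cluster; the hostnames and service names are hypothetical):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: main-ingress
spec:
  rules:
  - host: big-app.example.com        # routed to the existing main app's Service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: big-app
            port:
              number: 80
  - host: small-app.example.com      # the small app shares the same entry point
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: small-app
            port:
              number: 80
With something like this, the main apps and the small app all share the single load balancer created for the ingress controller, so no extra ELB is needed.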

Domain Name mapping to K8 service type of load balancer on GKE

I am in the process of learning Kubernetes and creating a sample application on GKE. I am able to create pods, containers, and services on minikube; however, I got stuck when exposing the application to the internet using my custom domain, e.g. hr.mydomain.com.
My application, say file-process, is running on port 8080, and now I want to expose it to the internet. I tried creating a Service of type LoadBalancer on GKE. I get the IP of the load balancer and map it to the A record of hr.mydomain.com.
My question is: if this Service is restarted, does the Service IP change every time, making the service inaccessible?
How do I manage this? What are the best practices when mapping domain names to a Service?
File service
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: file-process-api
Google Kubernetes Engine is designed to take as much configuration hassle out of your hands as possible. Even if you restart the Service, nothing will change with regard to its availability from the internet.
Networking (including load balancing) is managed automatically within the GKE cluster:
...Kubernetes uses Services to provide stable IP addresses for applications running within Pods. By default, Pods do not expose an external IP address, because kube-proxy manages all traffic on each node. Pods and their containers can communicate freely, but connections outside the cluster cannot access the Service. For instance, in the previous illustration, clients outside the cluster cannot access the frontend Service using its ClusterIP.
This means that if you expose the Service and it gets an external IP, that IP will stay the same until the load balancer is deleted:
The network load balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service.
At this point, when you have a load balancer with a static public IP in front of your Service, you can set that IP as an A record for your domain.
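As a sketch of how that can look in the manifest (assuming you have already reserved a static external IP in GCP, e.g. with gcloud compute addresses create; 203.0.113.10 below is just a placeholder):
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder for your reserved static external IP
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    app: file-process-api
With the static address in place, the A record for hr.mydomain.com never needs to change, even if the Service or its Pods are recreated.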

Access the Kubernetes cluster/node from outside

I am new to Kubernetes. I have created a database cluster on Kubernetes with 2 nodes. I can access those pods from a thin client like DBeaver to check the data, but I cannot access the Kubernetes nodes externally. I am currently trying to run a thick client that will load data into the cluster on Kubernetes.
kubectl describe svc <svc>
I can see a ClusterIP assigned to the service. The type of my service is LoadBalancer. I tried to use that, but it still does not connect. I read about using NodePort, but without any IP address, how do I access it?
So what is the best way to connect to any node or the cluster from outside?
Thank you in advance
Regards
@KrishnaChaurasia is right, but I would like to explain it in more detail with the help of the official docs.
I strongly recommend going through the following sources:
NodePort Type Service: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>. Here is an example of the NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
    # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
  - port: 80
    targetPort: 80
    # Optional field
    # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
    nodePort: 30007
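Once that Service exists, a quick way to test it from outside the cluster (assuming a node has an IP reachable from your machine and the firewall allows the port) is:
kubectl get nodes -o wide      # the INTERNAL-IP / EXTERNAL-IP columns show the node addresses
curl http://<NodeIP>:30007     # substitute a node address that is reachable from where you are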
Accessing services running on the cluster: You have several options for connecting to nodes, pods and services from outside the cluster:
Access services through public IPs.
Use a service with type NodePort or LoadBalancer to make the service reachable outside the cluster. See the services and kubectl expose documentation.
Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication?
Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, place a unique label on the pod and create a new service which selects this label.
In most cases, it should not be necessary for an application developer to directly access nodes via their node IPs.
A supplement example: Use a Service to Access an Application in a Cluster: This page shows how to create a Kubernetes Service object that external clients can use to access an application running in a cluster.
These will help you to better understand the concepts of different Service Types, how to expose and access them from outside the cluster.

ignite CommunicationSpi questions in PAAS environment

My environment is that the Ignite client is on Kubernetes and the Ignite server is running on a normal server.
In such an environment, TCP connections are not allowed from the server to the client.
For this reason, CommunicationSpi (server -> client) connections cannot be allowed.
What I'm curious about is: what issues can occur in situations where CommunicationSpi is not available?
In this environment, is there a way to make a CommunicationSpi (server -> client) connection?
In Kubernetes, a Service is used to communicate with Pods.
The default Service type in Kubernetes is ClusterIP.
ClusterIP is an internal IP address reachable only from inside the Kubernetes cluster. The ClusterIP enables the applications running within the pods to access the service.
To expose the Pods outside the Kubernetes cluster, you will need a k8s Service of type NodePort or LoadBalancer.
NodePort: Exposes the Service on each Node’s IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You’ll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort> .
Please note that it is needed to have external IP address assigned to one of the nodes in cluster and a Firewall rule that allows ingress traffic to that port. As a result kubeproxy on Kubernetes node (the external IP address is attached to) will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Alternatively, it is possible to use an Ingress.
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
Upon your comment, I recall that it is possible to use the hostNetwork and hostPort methods.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted, Kubernetes can reschedule the pod onto a different node, and so the application will change its IP address. Besides that, two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.
What is the host networking good for? For cases where a direct access to the host networking is required.
hostPort
The hostPort setting applies to the Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where the hostIP is the IP address of the Kubernetes node where the container is running and the hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8086
      hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using the hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section. The host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is the hostPort used for? For example, the nginx based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow the inbound traffic on these ports from the outside of the Kubernetes cluster.
To support such a deployment configuration, you would need to do a lot of work around the network configuration: setting up K8s Services, an Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and is working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8s environment and the servers are on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers in the configuration, as described here.

Kubernetes network config

Precondition: the Kubernetes cluster has 1 master and 2 workers. The cluster uses one CIDR for all nodes.
Question: how do I configure the network so that a pod on worker1 can communicate with a pod on worker2?
Kubernetes has its own service discovery, and you can define a Service for communication. If you want to communicate with or send requests to the pods running on worker2, you define a Service for that workload. Suppose you have an add-service workload and you want to communicate with it; then you define a Service for it like below:
apiVersion: v1
kind: Service
metadata:
  name: add-service
spec:
  selector:
    app: add
  ports:
  - port: 3000
    targetPort: add-service
Then from worker1 you can use the name add-service to communicate, and Kubernetes will use service discovery to find the right worker. Here is a detailed Hackernoon article about how to create a Pod, Deployment and Service and communicate between them.
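For completeness, here is a rough sketch of a Deployment that such a Service could select (the image is hypothetical); note the named container port add-service, which is what the Service's targetPort refers to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: add
spec:
  replicas: 2
  selector:
    matchLabels:
      app: add
  template:
    metadata:
      labels:
        app: add                    # matched by the Service's selector
    spec:
      containers:
      - name: add
        image: example/add:latest   # hypothetical image
        ports:
        - name: add-service         # named port referenced by the Service's targetPort
          containerPort: 3000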
A Kubernetes cluster consists of one or more nodes. A node is a host system, whether physical or virtual, with a container runtime and its dependencies (i.e., mostly Docker) and several Kubernetes system components, that is connected to a network allowing it to reach other nodes in the cluster. A simple cluster of two nodes might look like this:
You can find more answers here
When the cluster uses one CIDR for all nodes, the pods will be assigned IP addresses from that one subnet.