Access the Kubernetes cluster/node from outside

I am new to Kubernetes. I have created a database cluster on Kubernetes with 2 nodes. I can access the pods from a thin client like DBeaver to check the data, but I cannot access the nodes externally. I am currently trying to run a thick client which will load data into the cluster on Kubernetes.
kubectl describe svc <svc>
I can see a ClusterIP assigned to the service. My service is of type LoadBalancer. I tried to use that, but it is still not connecting. I read about using NodePort, but without an IP address, how do I access it?
So what is the best way to connect to a node or cluster from outside?
Thank you in advance
Regards

@KrishnaChaurasia is right, but I would like to explain it in more detail with the help of the official docs.
I strongly recommend going through the following sources:
NodePort Type Service: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>. Here is an example of the NodePort Service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: MyApp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
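Assuming the Service above is applied and a pod labeled app: MyApp is serving on port 80, a quick way to test access from outside the cluster is (the node IP is whatever kubectl reports):
$ kubectl get nodes -o wide     # note a node's EXTERNAL-IP
$ curl http://<NodeIP>:30007    # forwarded to port 80 on a matching pod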
Accessing services running on the cluster: You have several options for connecting to nodes, pods and services from outside the cluster:
Access services through public IPs.
Use a service with type NodePort or LoadBalancer to make the service reachable outside the cluster (a minimal kubectl expose sketch follows this list). See the services and kubectl expose documentation.
Depending on your cluster environment, this may just expose the service to your corporate network, or it may expose it to the internet. Think about whether the service being exposed is secure. Does it do its own authentication?
Place pods behind services. To access one specific pod from a set of replicas, such as for debugging, place a unique label on the pod and create a new service which selects this label.
In most cases, it should not be necessary for an application developer to directly access nodes via their node IPs.
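As a minimal sketch of the kubectl expose approach mentioned in the list above (the deployment name my-db and port 5432 are assumptions for illustration):
$ kubectl expose deployment my-db --type=LoadBalancer --port=5432 --target-port=5432
$ kubectl get svc my-db    # wait for an EXTERNAL-IP, then point your thick client at it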
A supplement example: Use a Service to Access an Application in a Cluster: This page shows how to create a Kubernetes Service object that external clients can use to access an application running in a cluster.
These will help you to better understand the concepts of different Service Types, how to expose and access them from outside the cluster.

Related

Domain Name mapping to K8 service type of load balancer on GKE

I am in the process of learning Kubernetes and creating a sample application on GKE. I am able to create pods, containers, and services on minikube; however, I got stuck when exposing the application on the internet using my custom domain, like hr.mydomain.com.
My application, say file-process, is running on port 8080, and now I want to expose it to the internet. I tried creating a service of LoadBalancer type on GKE. I got the IP of the load balancer and mapped it to the A record of hr.mydomain.com.
My question is: if this service is restarted, does the service IP change every time, making the service inaccessible?
How do I manage it? What are the best practices when mapping domain names to services?
File service
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
  labels:
    app: file-process-service
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: file-process-api
Google Kubernetes Engine is designed to take as much configuration hassle out of your hands as possible. Even if you restart the service, nothing will change in regards to its availability from the Internet.
Networking (including load balancing) is managed automatically within the GKE cluster:
...Kubernetes uses Services to provide stable IP addresses for applications running within Pods. By default, Pods do not expose an external IP address, because kube-proxy manages all traffic on each node. Pods and their containers can communicate freely, but connections outside the cluster cannot access the Service. For instance, in the previous illustration, clients outside the cluster cannot access the frontend Service using its ClusterIP.
This means that if you expose the service and it has an external IP, that IP will stay the same until the load balancer is deleted:
The network load balancer is aware of all nodes in your cluster and configures your VPC network's firewall rules to allow connections to the Service from outside the VPC network, using the Service's external IP address. You can assign a static external IP address to the Service.
At this point, when you have a load balancer with a static public IP in front of your service, you can set this IP as an A record for your domain.
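A minimal sketch of pinning that address (the address name and region are assumptions; the reserved IP must be a regional address in the cluster's region):
# Reserve a static regional IP first:
# $ gcloud compute addresses create file-process-ip --region europe-west3
apiVersion: v1
kind: Service
metadata:
  name: file-process-service
spec:
  type: LoadBalancer
  loadBalancerIP: <reserved-static-ip>   # the address reserved above
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: file-process-api
With this in place, the Service can be deleted and re-created without losing the IP your A record points to.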

GKE 1 load balancer with multiple apps on different assigned ports

I want to be able to deploy several single-pod apps and access them on a single IP address, leaning on Kubernetes to assign the ports as it does when you use a NodePort service.
Is there a way to use NodePort with a load balancer?
Honestly, NodePort might work by itself, but GKE seems to block direct access to the nodes. There don't seem to be firewall controls like on their unmanaged VMs.
Here's a service if we need something to base an answer on. In this case, I want to deploy 10 of these services, which are different applications, on the same IP, each publicly accessible on a different port, each proxying port 80 of the nginx container.
---
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80
  type: NodePort
GKE seems to block direct access to the nodes.
GCP allows creating firewall (FW) rules that allow incoming traffic either to 'All Instances in the Network' or to 'Specified Target Tags/Service Account' in your VPC Network.
Rules are persistent unless the opposite is specified under the organization's policies.
A node's external IP address can be checked at Cloud Console --> Compute Engine --> VM Instances or with kubectl get nodes -o wide.
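A minimal sketch of such a rule via gcloud (the rule name, network, and target tag are assumptions; the node tag can be found on the GKE node VMs in Compute Engine):
$ gcloud compute firewall-rules create allow-nodeports \
    --network default \
    --allow tcp:30000-32767 \
    --target-tags <your-gke-node-tag>
This opens the default NodePort range (30000-32767) for incoming traffic to the tagged nodes.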
I run GKE (managed K8s) and can access all my assets externally. I have opened all the needed ports in my setup. Below you can find my setup, as the quickest example:
$ kubectl get nodes -o wide
NAME        AGE   VERSION           INTERNAL-IP   EXTERNAL-IP
gke--mnnv   43d   v1.14.10-gke.27   10.156.0.11   34.89.x.x
gke--nw9v   43d   v1.14.10-gke.27   10.156.0.12   35.246.x.x

$ kubectl get svc -o wide
NAME     TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)                         SELECTOR
knp-np   NodePort   10.0.11.113   <none>        8180:30008/TCP,8180:30009/TCP   app=server-go
$ curl 35.246.x.x:30008/test
Hello from ServerGo. You requested: /test
That is why it looks like a bunch of NodePort type Services would be sufficient (each one serving requests for a particular selector).
If for some reason it's not possible to set up the FW rules to allow traffic directly to your Nodes, it's possible to configure a GCP TCP LoadBalancer:
Cloud Console --> Network Services --> Load Balancing --> Create LB --> TCP Load Balancing.
There you can select your GKE Nodes (or pool of nodes) as the 'Backend' and specify all the needed ports for the 'Frontend'. For the Frontend, you can reserve a static IP right during the configuration and specify the 'Port' range as two port numbers separated by a dash (assuming you have multiple ports to be forwarded to your node pool). Additionally, you can create multiple 'Frontends' if needed.
I hope that helps.
Is there a way to use NodePort with a load balancer?
The Kubernetes LoadBalancer type service builds on top of NodePort: internally, a LoadBalancer service uses a NodePort, meaning that when a LoadBalancer type service is created, it is automatically mapped to a NodePort. Although it's tricky, it is also possible to create a NodePort type service and manually configure the Google-provided load balancer to point to the NodePorts.
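As a small illustration of this layering, here is a sketch of the same service with type LoadBalancer (the explicit nodePort is optional and its value here is an assumption):
apiVersion: v1
kind: Service
metadata:
  name: foo-svc
spec:
  type: LoadBalancer   # still allocates a NodePort under the hood
  selector:
    app: nginx
  ports:
    - name: foo
      protocol: TCP
      port: 80
      nodePort: 30080  # pins the underlying NodePort; omit to let Kubernetes pick one
kubectl get svc foo-svc would then show the mapping in the PORT(S) column, e.g. 80:30080/TCP.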

ignite CommunicationSpi questions in PAAS environment

My environment is as follows: the Ignite client is on Kubernetes and the Ignite server is running on a normal server.
In such an environment, TCP connections are not allowed from the server to the client.
For this reason, CommunicationSpi (server -> client) cannot be allowed.
What I'm curious about is: what issues can occur in situations where CommunicationSpi is not available?
In this environment, is there a way to make a CommunicationSpi (server -> client) connection?
In Kubernetes, the service is used to communicate with pods.
The default service type in Kubernetes is ClusterIP.
ClusterIP is an internal IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service.
To expose the pods outside the Kubernetes cluster, you will need a K8s service of NodePort or LoadBalancer type.
NodePort: Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.
Please note that you need an external IP address assigned to one of the nodes in the cluster and a firewall rule that allows ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.
Alternatively, it is possible to use an Ingress.
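A minimal Ingress sketch (the host, service name, and port are assumptions; an Ingress controller must be running in the cluster for this to take effect):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 8080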
There is a very good article on accessing Kubernetes Pods from outside of the cluster.
Hope that helps.
Edited on 09-Dec-2019
Upon your comment, I recall that it's possible to use the hostNetwork and hostPort methods.
hostNetwork
The hostNetwork setting applies to the Kubernetes pods. When a pod is configured with hostNetwork: true, the applications running in such a pod can directly see the network interfaces of the host machine where the pod was started. An application that is configured to listen on all network interfaces will in turn be accessible on all network interfaces of the host machine.
Example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx
You can check that the application is running with: curl -v http://kubenode01.example.com
Note that every time the pod is restarted Kubernetes can reschedule the pod onto a different node and so the application will change its IP address. Besides that two applications requiring the same port cannot run on the same node. This can lead to port conflicts when the number of applications running on the cluster grows.
What is the host networking good for? For cases where direct access to the host networking is required.
hostPort
The hostPort setting applies to the Kubernetes containers. The container port will be exposed to the external network at <hostIP>:<hostPort>, where hostIP is the IP address of the Kubernetes node where the container is running and hostPort is the port requested by the user.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 8086
          hostPort: 443
The hostPort feature allows you to expose a single container port on the host IP. Using the hostPort to expose an application to the outside of the Kubernetes cluster has the same drawbacks as the hostNetwork approach discussed in the previous section: the host IP can change when the container is restarted, and two containers using the same hostPort cannot be scheduled on the same node.
What is the hostPort used for? For example, the nginx based Ingress controller is deployed as a set of containers running on top of Kubernetes. These containers are configured to use hostPorts 80 and 443 to allow the inbound traffic on these ports from the outside of the Kubernetes cluster.
To support such a deployment configuration you would need to dance a lot around the network configuration - setting up K8s Services, the Ignite AddressResolver, etc. The Ignite community is already aware of this inconvenience and is working on an out-of-the-box solution.
Updated
If you run Ignite thick clients in a K8s environment and the servers are on VMs, then you need to enable the TcpCommunicationSpi.forceClientToServerConnections mode to avoid connectivity issues.
If you run Ignite thin clients, then just provide the IPs of the servers as described here.

How can I expose a StatefulSet service with ClusterIP None on Google Cloud Platform?

How can I expose a StatefulSet service (cassandra, mysql, etc...) with ClusterIP=None on Kubernetes in Google Cloud Platform?
Do I need to change the ClusterIP config? Or do I need to configure Google Cloud NAT? Or do I need to change something else?
Thanks
EDIT: I want to connect to Cassandra from an external IP, from any place on the internet
EDIT2: I guess that the solution is to use LoadBalancer instead of ClusterIP, but when I use LoadBalancer, the Cassandra nodes can't find the seed node. So I am still using ClusterIP=None for the Cassandra cluster, and I created another POD with type=LoadBalancer to connect to Cassandra and to have connections to the exterior. And now it's working :)
If by "expose" you mean the ability to reach your service endpoints without a cluster IP, then just use a selector in your headless service, i.e.:
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  clusterIP: None
  selector:
    app: cassandra
  ports:
    - port: 80
      targetPort: 80
For more details, refer to the documentation.
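As a quick way to see what a headless Service resolves to (illustrative output; assumes a busybox image can be pulled):
$ kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup cassandra
# A headless Service resolves directly to the individual pod IPs, e.g.:
# Name:      cassandra.default.svc.cluster.local
# Address 1: 10.32.0.4
# Address 2: 10.32.0.5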
Otherwise, if you want to expose your deployments outside of the cluster, you won't be able to do it with a headless service.
ClusterIP services are not exposed outside of the Kubernetes cluster. Perhaps you mean to use a NodePort or LoadBalancer service instead?
If you want to expose the service externally, you will need a service that is ClusterIP backed whether that be a NodePort or LoadBalancer; even if you use ingress, you will need to back it up with a ClusterIP service at the very least.
The ClusterIP is only internal and provides the Kubernetes cluster a fixed endpoint to reference your deployment/pod internally. The simplest method to expose your services is to use a NodePort, in which case your service will take on the IP of the node externally with a high port number (30000+). On GCP, if you define a load balancer, you will be given an external IP, and the traffic will be forwarded in order to your pods in the stateful set. If you use an ingress, your external IP will be that of your ingress, and the packet forwarding to your services will be done based on the request URL (i.e. you can have multiple FQDNs mapped to a single external IP in your DNS).
"Headless" services are mainly used to decouple your design from Kubernetes. The assumption is that you will be doing your own service discovery, and I don't believe that is a good use case for your application.
Hope this helps!

What is an 'endpoint' in Kubernetes?

I am new to Kubernetes and started reading through the documentation.
The term 'endpoint' is often used there, but the documentation lacks an explicit definition.
What is an 'endpoint' in terms of Kubernetes? Where is it located?
I could imagine the 'endpoint' is some kind of access point for an individual 'node', but that's just a guess.
While you're correct that the glossary indeed has no entry for endpoint, it is a well-defined Kubernetes network concept or abstraction. Since it's of a secondary nature, you'd usually not manipulate it directly. There's a core resource Endpoints defined, and it's also supported on the command line:
$ kubectl get endpoints
NAME         ENDPOINTS            AGE
kubernetes   192.168.64.13:8443   10d
And there you see what it effectively is: an IP address and a port. Usually, you'd let a service manage endpoints (one EP per pod the service routes traffic to) but you can also manually manage them if you have a use case that requires it.
Pods expose themselves through endpoints to a service.
It is, if you will, part of a pod.
Source: Services and Endpoints
An endpoint is a resource that gets the IP addresses of one or more pods dynamically assigned to it, along with a port. An endpoint can be viewed using kubectl get endpoints.
An endpoint resource is referenced by a kubernetes service, so that the service has a record of the internal IPs of pods in order to be able to communicate with them.
We need endpoints as an abstraction layer because the 'service' in kubernetes acts as part of the orchestration to ensure distribution of traffic to pods (including only sending traffic to healthy pods). For example if a pod dies, a replacement pod will be generated, with a new IP address. Conceptually, the dead pod IP will be removed from the endpoint object, and the IP of the newly created pod will be added, so that the service is updated and 'knows' which pods to connect to.
Read 'Exposing pods to the cluster', then 'Creating a Service' here - https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#exposing-pods-to-the-cluster
An easy way to investigate and see the relationship is:
kubectl describe pods - and observe the IP addresses of your pods
kubectl get ep - and observe the IP addresses assigned to your endpoint
kubectl describe service myServiceName - and observe the Endpoints associated with your service
So no, the endpoint isn't anything to do with the IP of an individual node. I find it useful to understand the overall structure of Kubernetes and the relationship between the cluster, nodes, services, endpoints and pods: picture an ingress flow in which OSI layer 4 (the TCP layer) reaches a back-end Node 1, and the OSI layer 7 (HTTP layer) ingress ultimately reaches 'Web Container 1' in Pod 1.
I'll answer your questions one by one:
What is an 'endpoint' in terms of Kubernetes?
(The resource name in K8S is Endpoints).
Endpoints is an object in kubernetes which represents a… List of Endpoints.
Those endpoints can be:
An internal pod running inside the cluster - this is the form that is more familiar.
It is created automatically behind the scenes for us when we create a service and pods, and the service's label selector matches the pods' labels.
An external IP which is not a pod - this is the least known option.
The external IP can reside outside the cluster - for example external web server or database.
It can also reside in a different namespace - if you want to point your Service to a Service in a different Namespace inside your cluster.
Regarding external Endpoints - if you do not specify a label selector in your service, Kubernetes can't create the list of endpoints because it doesn't know which pods should be included and proxied by the service.
Where is it located?
As the great diagrams provided here show, it sits between the service and the internal (pods) or external (web servers, databases, etc.) resources.
I could imagine the 'endpoint' is some kind of access point for an individual 'node'
It's an access point to a resource that sits inside one of the nodes in your cluster.
An Endpoint can reside inside one of the nodes in your cluster, or outside your cluster / environment.
If it's an internal endpoint (which means that the pod's labels match a service's label selector), you can see it with:
$ kubectl describe svc/my-service
Name:                     my-service
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-service","namespace":"...
Selector:                 run=some-run
Type:                     NodePort
IP:                       10.100.92.162
Port:                     <unset>  8080/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31300/TCP
Endpoints:                172.21.21.2:80,172.21.38.56:80,172.21.39.160:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
Or directly with:
$ kubectl get endpoints my-service
NAME         ENDPOINTS                                         AGE
my-service   172.21.21.2:80,172.21.38.56:80,172.21.39.160:80   63d
Regarding external Endpoints:
You create a service without a label selector:
apiVersion: v1
kind: Service
metadata:
  name: my-service  #<------ Should match the name of the Endpoints object
spec:
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 9376
So the corresponding Endpoints object will not be created automatically; you manually add the Endpoints object and map the Service to the network address and port where the external resource is running:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service  #<------ Should match the name of the Service
subsets:
  - addresses:
      - ip: 192.0.2.45
    ports:
      - port: 9376
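Assuming both manifests above are applied, the mapping can be verified (illustrative output):
$ kubectl get endpoints my-service
NAME         ENDPOINTS         AGE
my-service   192.0.2.45:9376   5s
Traffic sent to my-service:8080 inside the cluster is then proxied to 192.0.2.45:9376.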
(Notice:) I used the term internal for the auto-generated Endpoints for pods that match the label selector, and the term external for Endpoints that were created manually.
I could have used the terms auto-generated and manual instead - that would have been more accurate, but I think also more confusing.
In most cases, when the Endpoints are related to pods inside our cluster, we would want them to be managed by K8S - in this case they will also need to be generated by K8S.
Think of Endpoints as the 'final destination to reach an app', or 'something at the very end'.
As you can see in the example below: pod-IP = 10.32.0.2, service-Port* = 3306, endpoint = [pod-IP]:[service-Port].
Therefore, for user Bob to reach the MySQL application, he should address 10.32.0.2:3306, which is the last node in the network where he can find his required information.
A simplistic example: when I want to access Google Mail, then for me/my browser the endpoint is gmail.com:443, similar to the [pod-IP]:[service-Port] above.
Endpoints track the IP Addresses of the objects the service send traffic to.
When a service selector matches a pod label, that IP Address is added to your endpoints.
Source: https://theithollow.com/2019/02/04/kubernetes-endpoints/
In K8s, an Endpoints entry takes the form "[IP]:[Port]" (plus other details), whereas in SOAP, an endpoint is a distributed API addressed by a URL.
From Google Cloud:
Endpoints is a distributed API management system. It provides an API console, hosting, logging, monitoring, and other features to help you create, share, maintain, and secure your APIs.