How to define a static ClusterIP in Kubernetes?

I know we can set a static public IP if we define a LoadBalancer, but can we set a static cluster IP for a service?
Example:
NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/my-application-service    ClusterIP   10.111.67.245   <none>        80/TCP    11d

It looks like you can specify the clusterIP field under spec on a ClusterIP kind service.
Example:
apiVersion: v1
kind: Service
metadata:
  name: myawesomeservice
  namespace: myawesomenamespace
spec:
  clusterIP: 10.43.11.51
  ...
Most relevant snippet from the docs:
"If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail" - https://kubernetes.io/docs/reference/kubernetes-api/services-resources/service-v1/
And here is the full paragraph.
spec
clusterIP (string)
clusterIP is the IP address of the service and is usually assigned randomly. If an address is specified manually, is in-range (as per system configuration), and is not in use, it will be allocated to the service; otherwise creation of the service will fail. This field may not be changed through updates unless the type field is also being changed to ExternalName (which requires this field to be blank) or the type field is being changed from ExternalName (in which case this field may optionally be specified, as described above). Valid values are "None", empty string (""), or a valid IP address. Setting this to "None" makes a "headless service" (no virtual IP), which is useful when direct endpoint connections are preferred and proxying is not required. Only applies to types ClusterIP, NodePort, and LoadBalancer. If this field is specified when creating a Service of type ExternalName, creation will fail. This field will be wiped when updating a Service to type ExternalName. More info: https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies
src: https://kubernetes.io/docs/reference/kubernetes-api/services-resources/service-v1/
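For illustration, a complete manifest might look like the sketch below. The names, selector, and ports are hypothetical, and the address must fall inside your cluster's service CIDR (for example 10.43.0.0/16 on k3s) or the API server will reject the create:
apiVersion: v1
kind: Service
metadata:
  name: myawesomeservice
  namespace: myawesomenamespace
spec:
  type: ClusterIP
  clusterIP: 10.43.11.51      # must be in-range and unused, or creation fails
  selector:
    app: myawesomeapp         # hypothetical pod label
  ports:
  - port: 80
    targetPort: 8080          # hypothetical container port
Note that the field is immutable after creation; to change it you have to delete and recreate the Service.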

Related

How to change Nginx Ingress port number?

I have a K8S service (app-filestash-testing) running like the following:
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
app-filestash-testing   ClusterIP   10.111.128.18   <none>        10000/TCP   18h
kubernetes              ClusterIP   10.96.0.1       <none>        443/TCP     20h
I used the following YAML file to create an Ingress, trying to reach this service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-filestash-testing
spec:
  rules:
  - host: www.masternode.com
    http:
      paths:
      - backend:
          serviceName: app-filestash-testing
          servicePort: 10000
In the /etc/hosts file, I made this change (I used the worker node public IP):
127.0.0.1 localhost
xx.xxx.xxx.xxx www.masternode.com
However, when I checked the Ingress, I saw that the Ingress port is 80.
NAME                    CLASS   HOSTS                ADDRESS   PORTS   AGE
app-filestash-testing   nginx   www.masternode.com             80      14h
Currently the service is running and listening on port 10000, but the Ingress port is 80.
I am just wondering: is there any method/setting to change the Ingress port to 10000? How can I reach this service through the Ingress? Is it possible to set the port number in the /etc/hosts file?
Thanks.
From: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.
NodePort might be what you are looking for. More information and options are documented here: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
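As a sketch (reusing the service name and port from the question; the selector label is an assumption, and the nodePort value is a hypothetical pick from the default 30000-32767 range), a NodePort variant could look like this:
apiVersion: v1
kind: Service
metadata:
  name: app-filestash-testing
spec:
  type: NodePort
  selector:
    app: app-filestash-testing   # assumed label; match your pod labels
  ports:
  - port: 10000
    targetPort: 10000
    nodePort: 30100              # optional; omit to let Kubernetes pick one
The service would then be reachable on every node at <node-ip>:30100.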
With a regular Ingress you can't set a specific port on which the Ingress will be reachable.
In some specific circumstances it could theoretically be possible, through adding specific annotations, but I don't believe there is such a thing for nginx-ingress.
It is, however, entirely possible to have an ingress class that is accessible over a different port.
I'm not familiar enough with nginx-ingress to say how to do it there, but if you were to use ingress-nginx instead, there are settings that change these ports.
Through installing this ingress class with Helm, for example, you can supply the values controller.service.ports.http (which defaults to 80) and/or controller.service.ports.https (which defaults to 443), as in the sketch below.
Very likely there is a way to do this for nginx-ingress as well. You have to consider, however, whether the added complexity is really worth it when you only want to change the port.
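For example (a sketch assuming the ingress-nginx Helm chart; the release and namespace names are hypothetical):
# add the chart repo first (one-time)
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
# install with the controller's HTTP port moved from 80 to 10000
helm install my-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.ports.http=10000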

kubernetes LoadBalancer service target port set as random in GCP instead of as configured

This is the simplest config straight from the docs, but when I create the service, kubectl lists the target port as something random. Setting the target port to 1337 in the YAML:
apiVersion: v1
kind: Service
metadata:
  name: sails-svc
spec:
  selector:
    app: sails
  ports:
  - port: 1337
    targetPort: 1337
  type: LoadBalancer
And this is what k8s sets up for services:
kubectl get services
NAME           TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      <X.X.X.X>    <none>        443/TCP          23h
sails          LoadBalancer   <X.X.X.X>    <X.X.X.X>     1337:30203/TCP   3m6s
svc-postgres   ClusterIP      <X.X.X.X>    <none>        5432/TCP         3m7s
Why is k8s setting the target port to 30203, when I'm specifying 1337? It does the same thing if I try other port numbers, 80 gets 31887. I've read the docs but disabling those attributes did nothing in GCP. What am I not configuring correctly?
The kubectl get services output shows Port:NodePort/Protocol for each service. By default, and for convenience, the Kubernetes control plane allocates the NodePort from a range (default: 30000-32767); see the example in the documentation.
To get the targetPort information, try using
kubectl get service <your service name> --output yaml
This command shows all port details, and the stable external IP address under loadBalancer:ingress:.
Refer to the documentation for more details on creating a Service of type LoadBalancer.
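For illustration, the relevant parts of that YAML look roughly like this (abridged, placeholder values):
spec:
  ports:
  - port: 1337        # what the Service (and the load balancer) exposes
    targetPort: 1337  # what the container actually listens on
    nodePort: 30203   # auto-allocated from 30000-32767; not your targetPort
status:
  loadBalancer:
    ingress:
    - ip: X.X.X.X     # the stable external IP of the cloud load balancer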
Maybe this was tripping me up more than it should have, due to some redirects I didn't realize were happening, but after ironing out some things with my internal container, this worked.
Yields:
NAME           TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes     ClusterIP      10.3.240.1    <none>        443/TCP          28h
sails          LoadBalancer   10.3.253.83   <X.X.X.X>     1337:30766/TCP   9m59s
svc-postgres   ClusterIP      10.3.248.7    <none>        5432/TCP         12m
I can curl against EXTERNAL-IP:1337. The internal target port was what was tripping me up. I thought that meant my pod needed to open up to that port, and that pod applications were supposed to bind to it (i.e. 30766), but that's not the case. That port is an internal port mapping to the pod that I still don't fully understand yet, but the pod still gets external traffic on port 1337 to the pod's 1337 port. I'd like to understand what's going on there better as I get more into the k8s Networking section of the docs, or if anyone can enlighten me.
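To spell out the path (a sketch using the ports from the output above; <EXTERNAL-IP> is a placeholder):
# client -> <EXTERNAL-IP>:1337  (cloud load balancer, the Service "port")
# LB     -> <node-ip>:30766     (the NodePort kube-proxy opens on every node)
# node   -> <pod-ip>:1337       (the targetPort, the only port the app binds)
curl http://<EXTERNAL-IP>:1337/
So the application only binds to the targetPort; the nodePort is plumbing between the cloud load balancer and kube-proxy.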

To expose the LoadBalancer with a static IP

I understand that we can expose the service as a LoadBalancer.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s

Namespace:              default
Labels:                 app.kubernetes.io/name=load-balancer-example
Annotations:            <none>
Selector:               app.kubernetes.io/name=load-balancer-example
Type:                   LoadBalancer
IP:                     10.3.245.137
LoadBalancer Ingress:   104.198.205.71
I have created a static IP.
Is it possible to replace the LoadBalancer Ingress with a static IP?
tl;dr = yes, but trying to edit the IP in that Service resource won't do what you expect -- it's just reporting the current state of the world to you
Is it possible to replace the LoadBalancer Ingress with a static IP?
First, the LoadBalancer is whatever your cloud provider created when Kubernetes asked it to create one; there are a lot of annotations (the linked one is for AWS, but there should be ones for your cloud provider, too) that influence that creation, and it appears EIPs for NLBs is one of them, but I doubt that does what you're asking.
Second, type: LoadBalancer is merely a convenience; it's not required to expose your Service outside the cluster. It's a replacement for creating a Service of type: NodePort, then creating an external load balancer resource, associating all the Nodes in your cluster with that load balancer, and pointing it at the NodePort on the Nodes to get traffic from the outside world into the cluster. If you already have a load balancer with a static IP, you can update its registration to point at the NodePort allocations for your existing my-service and you'll be back in business.
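Depending on the provider, there is also the spec.loadBalancerIP field, which on GCP can be set to a pre-reserved regional static IP. A sketch (the IP value is hypothetical; support and semantics vary by cloud provider, and recent Kubernetes versions deprecate this field in favor of provider-specific annotations):
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  loadBalancerIP: 104.198.205.71   # your reserved static IP (placeholder)
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
  - port: 8080
    targetPort: 8080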

How do I access a service on a kubernetes node from another node on the same cluster?

My service description:
kubectl describe service app-checklot --namespace=app-test-gl

Name:               app-checklot
Namespace:          app-test-gl
Labels:             app=app-checklot
                    chart=app-checklot-0.1.0
                    heritage=Tiller
                    release=chkl
Annotations:        <none>
Selector:           app=app-checklot,release=chkl
Type:               ClusterIP
IP:                 10.99.252.76
Port:               https  11080/TCP
TargetPort:         11080/TCP
Endpoints:          85.101.213.102:11080,85.101.213.103:11080
Session Affinity:   None
Events:             <none>
I am able to access the pods separately using their individual IPs:
http://85.101.213.102:11080/service
http://85.101.213.103:11080/service
And also the service, using its cluster IP (this is what needs to be configured from another node, by means of the URL):
http://10.99.252.76:11080/service
What I would want is to access the service (app-checklot) using the service name in the URL, so that I needn't update the URL every time. Is this possible? If so, how?
From the documentation:
For example, if you have a Service called "my-service" in a Kubernetes Namespace called "my-ns", a DNS record for "my-service.my-ns" is created. Pods which exist in the "my-ns" Namespace should be able to find it by simply doing a name lookup for "my-service". Pods which exist in other Namespaces must qualify the name as "my-service.my-ns". The result of these name lookups is the cluster IP.
Another service, deployed to the same namespace, would be able to call http://app-checklot:11080/service.
Yes, from within the cluster your service should be available at:
http://app-checklot.app-test-gl:11080/service
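For a quick check from any namespace (a sketch; the curlimages/curl image and the pod name are arbitrary choices):
# run a throwaway pod and resolve the service by its DNS name
kubectl run dns-test --rm -i --image=curlimages/curl --restart=Never -- \
  curl -s http://app-checklot.app-test-gl:11080/service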

Does NodePort load-balance between deployments?

So I am setting up an entire stack on Google Cloud and I have several components that need to talk with each other, so I came up with the following flow:
Ingress -> Apache Service -> Apache Deployment (2 instances) -> App Service -> App Deployment (2 instances)
So the Ingress divides the requests nicely among my 2 Apache instances, but the Apache instances don't divide them nicely among my 2 App instances.
The services (Apache and App) are in both cases a NodePort service.
What I am trying to achieve is that the services (Apache and App) load-balance the requests they receive among their linked deployments, but I don't know if a NodePort service can even do that, so I was wondering how I could achieve this.
App service yaml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: preprocessor-service
  labels:
    app: preprocessor
spec:
  type: NodePort
  selector:
    app: preprocessor
  ports:
  - port: 80
    targetPort: 8081
If you are going through the ClusterIP and the proxy mode is the default, iptables, then the NodePort service will pick a backend at random (Kubernetes 1.1 or later); this is called the iptables proxy mode. For earlier Kubernetes (1.0) the default was the userspace proxy mode, which does round robin. If you want to control this behavior you can use the IPVS proxy mode.
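For example, switching kube-proxy to IPVS mode is done through its configuration (a sketch of a KubeProxyConfiguration; the scheduler field picks the balancing algorithm, e.g. rr for round robin):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round robin; other schedulers such as lc (least connection) exist
On kubeadm clusters this typically lives in the kube-proxy ConfigMap in kube-system, and the IPVS kernel modules must be available on the nodes.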
When I say clusterIP I mean the IP address that is only understood by the cluster such as the one below:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
http-svc     NodePort    10.109.87.179   <none>        80:30723/TCP   5d
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        69d
When you specify NodePort, it should also be a mesh across all of your cluster nodes. In other words, all the nodes in your cluster will listen on that particular port on their external IP, and kube-proxy will forward the traffic to one of the backing pods, wherever it happens to run (with externalTrafficPolicy: Local, only nodes that run a pod will answer). So you can potentially set up an external load balancer that points its backends at that specific NodePort, and traffic would be forwarded according to a healthcheck on the port.
I'm not sure in your case; is it possible that you are not using the ClusterIP?