How to expose the LoadBalancer with a static IP - kubernetes

I understand that we can expose the service as a LoadBalancer.
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
kubectl get services my-service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   LoadBalancer   10.3.245.137   104.198.205.71   8080/TCP   54s
kubectl describe services my-service
Namespace: default
Labels: app.kubernetes.io/name=load-balancer-example
Annotations: <none>
Selector: app.kubernetes.io/name=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
LoadBalancer Ingress: 104.198.205.71
I have created a static IP.
Is it possible to replace the LoadBalancer Ingress with that static IP?

tl;dr = yes, but trying to edit the IP in that Service resource won't do what you expect -- it's just reporting the current state of the world to you
Is it possible to replace the LoadBalancer Ingress with static IP?
First, the LoadBalancer is whatever your cloud provider created when kubernetes asked it to create one; there are a lot of annotations (that one is for AWS, but there should be equivalents for your cloud provider, too) that influence the creation, and assigning EIPs to NLBs appears to be one of them, but I doubt that does what you're asking
Second, the type: LoadBalancer is merely a convenience -- it's not required to expose your Service outside of the cluster. It's a replacement for creating a Service of type: NodePort, then creating an external load balancer resource, associating all the Nodes in your cluster with that load balancer, pointing to the NodePort on the Node to get traffic from the outside world into the cluster. If you already have a load balancer with a static IP, you can update its registration to point to the NodePort allocations for your existing my-service and you'll be back in business
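For completeness: on providers that do support requesting a specific address (GKE, for example), you can also set spec.loadBalancerIP on the Service to your pre-reserved static IP. A minimal sketch, reusing the selector and port from the describe output above; the targetPort is an assumption, and note the field is deprecated since Kubernetes v1.24 in favor of provider-specific annotations:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # ask the cloud provider for this pre-reserved address instead of an
  # ephemeral one; deprecated since v1.24, but still honored by e.g. GKE
  loadBalancerIP: <your-reserved-static-ip>
  selector:
    app.kubernetes.io/name: load-balancer-example
  ports:
    - port: 8080
      targetPort: 8080   # assumed; match your container port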

Related

Kubernetes - Curl a Cluster-IP Service

I'm following this kubernetes tutorial to create a service https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/#creating-a-service
I'm using minikube on my local environment. Everything works fine, but I cannot curl my cluster IP; I get an operation timeout:
curl: (7) Failed to connect to 10.105.7.117 port 80: Operation timed out
My kubectl get svc output:
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   5d17h
my-nginx     ClusterIP   10.105.7.117   <none>        80/TCP    42h
It seems that I'm having the same issue as this person here, who did not find any answer to their problem: https://github.com/kubernetes/kubernetes/issues/86471
I have tried to do the same on my gcloud console but I have the same result. I can only curl my external IP service.
If I understood correctly, I'm supposed to already be inside my minikube local cluster when I start minikube, so I should be able to curl the service as mentioned in the tutorial.
What am I doing wrong?
Although each Pod has a unique IP address, those IPs are not exposed outside the cluster without a Service. Services allow your applications to receive traffic. Services can be exposed in different ways by specifying a type in the ServiceSpec:
ClusterIP (default) - Exposes the Service on an internal IP in the cluster. This type makes the Service only reachable from within the cluster. That is why you cannot access your service via ClusterIP from outside the cluster.
NodePort - Exposes the Service on the same port of each selected Node in the cluster using NAT. Makes a Service accessible from outside the cluster using <NodeIP>:<NodePort>. Superset of ClusterIP.
kind: Service
apiVersion: v1
metadata:
  name: example
  namespace: example
spec:
  type: NodePort
  selector:
    app: example
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: ui
Then execute the command:
$ kubectl get svc --namespace=example
NAME      TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
example   NodePort   yy.zz.xx.xx   <none>        8080:30960/TCP   1d
Run minikube ip to get the node IP:
$ minikube ip
aa.bb.cc.dd
then you can curl it using the NodePort from the output above (not the service port):
curl http://aa.bb.cc.dd:30960
LoadBalancer - Creates an external load balancer in the current cloud (if supported) and assigns a fixed, external IP to the Service. Superset of NodePort.
kind: Service
apiVersion: v1
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
    - protocol: "TCP"
      port: 8080
      targetPort: 8080
  type: LoadBalancer
  externalIPs:
    - <your minikube ip>
then you can curl it:
$ curl http://yourminikubeip:8080/
ExternalName - Exposes the Service using an arbitrary name (specified by externalName in the spec) by returning a CNAME record with the name. No proxy is used. This type requires v1.7 or higher of kube-dns. The Service itself is only exposed within the cluster; however, the external-name FQDN is not handled or controlled by the cluster. It is likely a publicly accessible URL, so you can curl it from anywhere. You'll have to configure your domain in a way that restricts who can access it.
The Service type ExternalName is external to the cluster and really only allows for a CNAME redirect from within your cluster to an external path.
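For reference, a minimal sketch of an ExternalName Service, assuming my.database.example.com is the external host you want to alias:
kind: Service
apiVersion: v1
metadata:
  name: example-external
spec:
  type: ExternalName
  # cluster DNS answers lookups for this Service with a CNAME to the name
  # below; no ports, selectors, or proxying are involved
  externalName: my.database.example.com
Pods in the cluster can then resolve example-external like any other Service name.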
See more: exposing-services-kubernetes.
ClusterIP is only available inside the Kubernetes cluster network.
If you want to be able to hit it from outside of the cluster, use a LoadBalancer to expose a public IP that you can then access from outside of the cluster.
Or:
kubectl port-forward <pod_name> 8080:80
then curl
curl http://localhost:8080
which will route through the port-forward to port 80 of the pod.
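As a side note, kubectl port-forward can also target a Service or Deployment instead of a single pod, which saves looking up a pod name (my-nginx is the Service from the question):
$ kubectl port-forward service/my-nginx 8080:80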

K8s even load balancing among pods not happening in load test [duplicate]

So I am setting up an entire stack on Google Cloud and I have several components that need to talk with each other, so I came up with the following flow:
Ingress -> Apache Service -> Apache Deployment (2 instances) -> App Service -> App Deployment (2 instances)
So the Ingress divides the requests nicely among my 2 Apache instances, but the Apache instances don't divide them nicely among my 2 App instances.
The services (Apache and App) are in both cases a NodePort service.
What I am trying to achieve is that the services (Apache and App) load-balance the requests they receive among their linked deployments, but I don't know if a NodePort service can even do that, so I was wondering how I could achieve this.
App service yaml looks like this:
apiVersion: v1
kind: Service
metadata:
name: preprocessor-service
labels:
app: preprocessor
spec:
type: NodePort
selector:
app: preprocessor
ports:
- port: 80
targetPort: 8081
If you are going through the ClusterIP and are using the default iptables proxy mode, the NodePort service will pick a backend pod at random (Kubernetes 1.1 or later); this is called iptables proxy mode. In earlier Kubernetes 1.0 the default was userspace proxy mode, which does round robin. If you want to control this behavior you can use the ipvs proxy mode, as sketched below.
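A minimal sketch of that, assuming your cluster lets you pass a KubeProxyConfiguration file to kube-proxy (the scheduler field selects the IPVS balancing algorithm):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  # rr = round robin; alternatives include lc (least connection)
  # and sh (source hashing)
  scheduler: "rr"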
When I say clusterIP I mean the IP address that is only understood by the cluster such as the one below:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
http-svc     NodePort    10.109.87.179   <none>        80:30723/TCP   5d
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        69d
When you specify NodePort it should also be a mesh across all of your cluster nodes. In other words, all the nodes in your cluster will listen on their external IP on that particular port; however, you'll get a response from your application or pod only if it happens to run on that particular node. So you can potentially set up an external load balancer that points its backend at that specific NodePort, and traffic would be forwarded according to a healthcheck on the port.
I'm not sure in your case; is it possible that you are not going through the ClusterIP?

expose private kubernetes cluster with NodePort type service

I have created a VPC-native cluster on GKE, master authorized networks disabled on it.
I think I did everything correctly, but I still can't access the app externally.
Below is my service manifest.
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 (0c01309)
  creationTimestamp: null
  labels:
    io.kompose.service: app
  name: app
spec:
  ports:
    - name: '3000'
      port: 80
      targetPort: 3000
      protocol: TCP
      nodePort: 30382
  selector:
    io.kompose.service: app
  type: NodePort
The app's container port is 3000 and I checked from the logs that it is working.
I added a firewall rule to open port 30382 in my VPC network too.
I still can't reach the node on the specified nodePort.
Is there anything I am missing?
kubectl get ep:
NAME         ENDPOINTS          AGE
app          10.20.0.10:3000    6h17m
kubernetes   34.69.50.167:443   29h
kubectl get svc:
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
app          NodePort    10.24.6.14   <none>        80:30382/TCP   6h25m
kubernetes   ClusterIP   10.24.0.1    <none>        443/TCP        29h
In Kubernetes, the service is used to communicate with pods.
To expose the pods outside the Kubernetes cluster, you will need a k8s Service of type NodePort.
The NodePort setting applies to the Kubernetes services. By default Kubernetes services are accessible at the ClusterIP which is an internal IP address reachable from inside of the Kubernetes cluster only. The ClusterIP enables the applications running within the pods to access the service. To make the service accessible from outside of the cluster a user can create a service of type NodePort.
Please note that one of the nodes in the cluster needs to have an external IP address assigned, and a firewall rule must allow ingress traffic to that port. As a result, kube-proxy on the Kubernetes node (the one the external IP address is attached to) will proxy that port to the pods selected by the service.
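On GKE, such a firewall rule could be created with something like the following sketch (the rule name is made up, and default is assumed to be your VPC network; adjust both):
$ gcloud compute firewall-rules create allow-nodeport-30382 \
    --network default \
    --allow tcp:30382
After that, curl http://<node-external-ip>:30382 should reach the service.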

How to assign a Public IP to a Kubernetes Ingress

I have deployed the Kong ingress controller using Helm.
And I have a Kubernetes cluster v1.10 on CentOS 7.
I am using a dedicated server from the OVH provider.
When I create an Ingress:
cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  backend:
    serviceName: jenkins
    servicePort: 8080
kubectl get ing
NAME      HOSTS   ADDRESS   PORTS   AGE
jenkins   *                 80      3s
kubectl get svc
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
jenkins   ClusterIP   10.254.104.80   <none>        8080/TCP   1d
Now I cannot access this Ingress from outside because I am using an OVH server.
Is there a solution?
OVH is not officially supported by Kubernetes. If it were supported, then generally you would create a Service jenkins of type: LoadBalancer, and that would be your externally facing endpoint with a public IP.
Since it's not supported, the next best thing is to create a NodePort service. That will create a service that listens on a specific port on all the Kubernetes nodes and forwards the requests to your Pods (only where they are running). So, in this case, you will have to create an OVH load balancer with a public IP and point the backend of that load balancer at the NodePort of the service your Ingress controller is listening on, as sketched below.
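A sketch of what that NodePort Service could look like, assuming the Kong proxy pods carry the label app: kong and listen on port 8000 (both depend on your Helm chart values, so verify them first):
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: kong                 # assumed label; check your Helm release
  ports:
    - port: 80
      targetPort: 8000        # Kong's default HTTP proxy port
      nodePort: 30080         # register this port as the OVH LB backend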
