How to get a static ELB endpoint for Kubernetes deployments - kubernetes

Every time I deploy a new build in Kubernetes, I get a different EXTERNAL-IP, which in the case below is afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com:
$ kubectl get services -o wide -l appname=${APP_FULLNAME_SYSTEST},stage=${APP_SYSTEST_ENV}
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
test-systest-lb-https LoadBalancer 123.45.xxx.21 afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com 443:30316/TCP 9d appname=test-systest,stage=systest
How can I get a static external endpoint (ELB) so that I can link it to Route 53? Do I have to include something in my Kubernetes deployment YAML file?
Additional details: I am using the load balancer spec below:
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 443
    targetPort: 8080
    protocol: TCP
  selector:
    appname: %APP_FULL_NAME%
    stage: %APP_ENV%

If you are just doing new builds of a single Deployment, then you should check what your pipeline is doing to the Service. You want to do a kubectl apply and a rolling update on the Deployment (provided the update strategy is set on the Deployment) without modifying the Service (so not a delete and a create). If you run kubectl get services you should see its age (your output shows 9d, so that's all good), and kubectl describe service <service_name> will show any events on it.
I'm guessing you just want an external DNS entry you can point at, like 'afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com', and not a truly static IP. If you do want a true static IP you won't get it with a classic ELB, but you can now try an NLB.
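A minimal sketch of the NLB route, assuming Kubernetes 1.9+ with the AWS cloud provider integration (names and ports copied from your question; the annotation asks AWS for a Network Load Balancer, which gets a static IP per availability zone):
apiVersion: v1
kind: Service
metadata:
  name: test-systest-lb-https
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 443
    targetPort: 8080
    protocol: TCP
  selector:
    appname: %APP_FULL_NAME%
    stage: %APP_ENV%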
If you mean you want multiple Deployments (different microservices) to share a single endpoint, then you could install an ingress controller and expose it with one ELB. Then, when you deploy new apps, you create an Ingress resource for each to tell the controller to expose them externally. That way you can put all your apps behind the same external endpoint, routed under different paths or subdomains. The nginx ingress controller is a good option; see the sketch below.
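A minimal sketch of such an Ingress, with hypothetical host and Service names (assumes the nginx ingress controller is already installed and exposed via one ELB):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-systest-ingress
spec:
  rules:
  - host: systest.example.com
    http:
      paths:
      - backend:
          serviceName: test-systest-svc
          servicePort: 8080
You then point systest.example.com at the controller's ELB hostname with a single Route 53 alias/CNAME record; the controller routes by Host header from there, so new apps only need a new Ingress, not a new ELB.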

Can I guarantee the "kubernetes" Service will retain a consistent ClusterIP following cluster creation even if I attempt to modify or recreate it?

A few of our Pods access the Kubernetes API via the "kubernetes" Service. We're in the process of applying Network Policies which allow access to the K8S API, but the only way we've found to accomplish this is to query for the "kubernetes" Service's ClusterIP, and include it as an ipBlock within an egress rule within the Network Policy.
Specifically, this value:
kubectl get services kubernetes --namespace default -o jsonpath='{.spec.clusterIP}'
Is it possible for the "kubernetes" Service ClusterIP to change to a value other than what it was initialized with during cluster creation? If so, there's a possibility our configuration will break. Our hope is that it's not possible, but we're hunting for official supporting documentation.
The short answer is no.
More details:
You cannot change/edit the clusterIP because it's immutable, so kubectl edit will not work for this field.
The Service's cluster IP can, however, easily be changed by kubectl delete -f svc.yaml followed by kubectl apply -f svc.yaml again.
Hence, never rely on a Service's IP; Services are designed to be referred to by DNS:
Use service-name if the caller is in the same namespace.
Use service-name.service-namespace if the caller is in the same namespace or a different one.
Use service-name.service-namespace.svc.cluster.local for the FQDN.
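For example, a quick sketch from any Pod that has curl (-k skips TLS verification just for the test):
curl -k https://kubernetes.default
curl -k https://kubernetes.default.svc.cluster.local
From a Pod in the default namespace, the short form curl -k https://kubernetes works too.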
Yes, that is possible.
If you specify clusterIP in your Service YAML file (Service.spec.clusterIP), the IP address of your Service will not be random and will always stay the same. The Service YAML should look like this:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: default
spec:
  clusterIP: 10.96.0.100
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 80
  type: ClusterIP
Be careful: the IP you choose must be inside your cluster's service CIDR and not already assigned.
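A sketch of one way to find the service CIDR, assuming a kubeadm-style cluster where the kube-apiserver flags show up in the dump:
kubectl cluster-info dump | grep -m 1 service-cluster-ip-range
Pick a free IP from that range, e.g. 10.96.0.100 when the range is 10.96.0.0/12.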

How to access kubernetes websites via https

I built my own single-host Kubernetes cluster (1 host, 1 node, many namespaces, many pods and services) on a virtual machine, running on an always-on server.
The applications running on the cluster are working fine (basically, a NodeJS backend and HTML frontend).
So far, I have a NodePort Service, which is exposing Port 30000:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
traefik-ingress-service NodePort 10.109.211.16 <none> 443:30000/TCP 147d
So, now I can access the web interface by typing https://<server-alias>:30000 in my browser's address bar.
But I would like to access it without giving the port, by only typing https://<server-alias>.
I know this can be done with the kubectl port-forward command:
kubectl -n kube-system port-forward --address 0.0.0.0 svc/traefik-ingress-service 443:443
This works. But it does not seem to be a very professional thing to do.
Port forwarding also seems to keep disconnecting from time to time. Sometimes it throws an error and quits, but leaves the process open, which leaves the port open - I have to kill the process manually.
So, is there a way to do that access-my-application stuff professionally? How do the cluster providers (AWS, GCP, ...) do that?
Thank you!
Using the nginx ingress controller you can access your website by its server name:
Step 1: Install the nginx ingress controller in your cluster; you can follow the official installation guide.
After the installation is completed you will have a new Pod:
NAME READY STATUS
nginx-ingress-xxxxx 1/1 Running
And a new Service:
NAME TYPE CLUSTER-IP EXTERNAL-IP
nginx-ingress LoadBalancer 10.109.x.y a.b.c.d
Step 2: Create a new Deployment for your application, but be sure that you use the same namespace as the nginx ingress svc/pod for your application, and that you set the svc type to ClusterIP, as in the sketch below.
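A minimal sketch of such a Service, with hypothetical names (adjust the selector, namespace, and ports to your app):
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  namespace: my-namespace
spec:
  type: ClusterIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 3000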
Step 3: Create the Kubernetes Ingress object
Now you have to create the Ingress object:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: **Same Name Space**
spec:
  rules:
  - host: your DNS <server-alias>
    http:
      paths:
      - backend:
          serviceName: svc Name
          servicePort: svc Port
Now you can access your website using the <server-alias> host name.
To create a DNS name for free you can use Freenom, or you can use /etc/hosts;
update it with:
a.b.c.d server-alias
Since the type of your Traefik ingress Service is NodePort, you access it on the allocated port, which gets a value from the 30000-32767 range.
You can also configure it to be of type LoadBalancer and interface with a cloud-based load balancer; see the sketch after the links below.
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/
Here's a very related question: Should I use NodePort in my Traefik deployment on Kubernetes?
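For reference, a hedged sketch of that change (assumes something can actually allocate the external IP, e.g. a cloud provider, or MetalLB on a self-hosted cluster like yours; the selector is hypothetical and must match your Traefik Pods):
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
  - name: https
    port: 443
    targetPort: 443
Once the Service has an external IP and DNS points at it, https://<server-alias> works without the :30000 suffix.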

How to assign Public IP to Kubernetes's Ingress

I have deployed the Kong-Ingress-controller using Helm.
And I have a Kubernetes cluster v1.10 on CentOS 7.
I am using a dedicated server from the OVH provider.
When I create an Ingress:
cat ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
spec:
  backend:
    serviceName: jenkins
    servicePort: 8080
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
jenkins * 80 3s
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jenkins ClusterIP 10.254.104.80 <none> 8080/TCP 1d
Now I cannot access this Ingress from outside, because I am using an OVH server.
Is there a solution?
OVH is not officially supported by Kubernetes. If it were supported, then generally you would create a jenkins Service of type LoadBalancer, and that would be your externally facing endpoint with a public IP.
Since it's not supported, the next best thing is to create a NodePort Service. That creates a Service that listens on a specific port on all the Kubernetes nodes and forwards requests to your Pods, wherever they are running. So, in this case, you would create an OVH load balancer with a public IP and point that load balancer's backend at the NodePort of the Service your Ingress controller is listening on, as sketched below.
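A minimal sketch of such a NodePort Service (the selector is hypothetical and must match the Kong proxy Pods from your Helm release; 8000 is Kong's default HTTP proxy port):
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy-nodeport
spec:
  type: NodePort
  selector:
    app: kong
  ports:
  - name: proxy
    port: 80
    targetPort: 8000
    nodePort: 30080
Your OVH load balancer's backend would then point at port 30080 on every node.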

Does NodePort load-balance requests between deployments?

So I am setting up an entire stack on Google Cloud and I have several components that need to talk with each other, so I came up with the following flow:
Ingress -> Apache Service -> Apache Deployment (2 instances) -> App Service -> App Deployment (2 instances)
So the Ingress divides the requests nicely among my 2 Apache instances, but the Apache Deployments don't divide them nicely among my 2 App instances.
The Services (Apache and App) are in both cases NodePort Services.
What I am trying to achieve is that the Services (Apache and App) load-balance the requests they receive among their linked Deployments, but I don't know if a NodePort Service can even do that, so I was wondering how I could achieve this.
The App Service YAML looks like this:
apiVersion: v1
kind: Service
metadata:
  name: preprocessor-service
  labels:
    app: preprocessor
spec:
  type: NodePort
  selector:
    app: preprocessor
  ports:
  - port: 80
    targetPort: 8081
If you are going through the clusterIP and the proxy mode is the default, iptables, then the Service picks a backend Pod at random for each connection (the default since Kubernetes 1.2); this is called iptables proxy mode. In Kubernetes 1.0 and 1.1 the default was userspace proxy mode, which does round robin. If you want to control this behavior you can use the ipvs proxy mode; see the sketch at the end of this answer.
When I say clusterIP, I mean the IP address that is only understood inside the cluster, such as the ones below:
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc NodePort 10.109.87.179 <none> 80:30723/TCP 5d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 69d
When you specify NodePort, it also forms a mesh across all of your cluster nodes. In other words, all the nodes in your cluster will listen on their external IP on that particular port, and with the default externalTrafficPolicy (Cluster) the request is forwarded to a matching Pod wherever it runs; only with externalTrafficPolicy: Local are you limited to Pods on the node you hit. So you can potentially set up an external load balancer that points its backend at that specific NodePort, and traffic would be forwarded according to a health check on the port.
I'm not sure about your case: is it possible that you are not going through the clusterIP?
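If you do want round-robin behavior, a sketch of switching kube-proxy to ipvs mode (assumes a kubeadm-managed cluster where kube-proxy reads a KubeProxyConfiguration from the kube-proxy ConfigMap in kube-system, and that the ipvs kernel modules are available):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"
After editing the ConfigMap (kubectl -n kube-system edit configmap kube-proxy), restart the kube-proxy Pods so they pick up the new mode.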