Kubernetes: How to update namespace without changing external IP address? - kubernetes

How to update namespace without changing External-IP of the Service?
Initially, it was without namespace and it was deployed:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-app
It creates an External-IP address, which I pointed my DNS at. Now I would like to move the Service into a namespace to keep things more organised. So I have created the namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: my-namespace
and I have updated the Service with the namespace:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-app
The above file creates another Service in my-namespace and the External-IP is not the same. Is there a way to update the namespace without recreating?
Please let me know if you need any information. Thanks!

Some cloud providers allow you to specify the external IP of a Service with https://kubernetes.io/docs/concepts/services-networking/service/#external-ips. If you can make use of this, you should be able to achieve what you need. It will not be a zero-downtime operation, though, as you'll first need to delete the current Service and recreate it in the new namespace with the external IP specified.
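A rough sketch of what the recreated Service could look like, assuming your provider supports this; 203.0.113.10 is only a placeholder for the address you want to keep:
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
  labels:
    app: my-app
spec:
  type: LoadBalancer
  externalIPs:
    - 203.0.113.10   # placeholder: the address your provider lets you reuse
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-app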

It seems that what you are after is a static public IP address, that is, one that you have reserved and that outlives your cluster. If you had one of these, you would be able to specify it for your LoadBalancer either as @Radek mentions above, or possibly with a provider-specific annotation. Then you would be able to move the IP address between LoadBalancers (the same applies to Ingress too).
However, it seems you have not yet allocated a static public IP. It may be a good time to do this, but as Azure doesn't appear to allow you to "promote" a dynamic IP to a static IP, it won't directly help you here (*).
So you're left with creating a new LoadBalancer resource with a new public IP. To aid the transition and avoid downtime, you could use an external DNS entry to switch users over from the 1st to the 2nd LoadBalancer IP address, which can be done seamlessly and without downtime if you set things up correctly. However, it does take a bit of time to transition: only once the DNS TTL period has elapsed is it safe to delete the first LoadBalancer.
If you don't have external DNS set up, this illustrates why it's a good idea to do so.
(*) GCP does allow you to do this; I doubt AWS does.
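For illustration only: on providers that support it, a reserved address can be requested via the loadBalancerIP field (the value below is a placeholder for an IP you have reserved in advance; some providers expect an annotation instead):
apiVersion: v1
kind: Service
metadata:
  name: my-app
  namespace: my-namespace
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.20   # placeholder for your reserved static IP
  ports:
    - port: 80
      protocol: TCP
  selector:
    app: my-app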

The best way to address this issue is to introduce an nginx ingress controller. Let the external LB route calls to the ingress controller, and just update the ingress rules with the correct service mapping.
The advantage is that you can expose any number of apps/services from a single external IP via ingress. Without ingress, you would have to set up one external LB for each application/service that you want to expose to the outside world.
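A minimal sketch of such an ingress rule, assuming an nginx ingress controller is already installed and using my-app.example.com as a placeholder hostname:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: my-namespace
spec:
  ingressClassName: nginx
  rules:
    - host: my-app.example.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
With this in place, the external IP belongs to the ingress controller's own Service, so individual application Services can be moved or recreated without the public address changing.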

Related

What is the purpose of headless service in Kubernetes?

I am new to the Kubernetes world. I was going through an interesting concept called a headless service.
I have read about it, understand it, and I can create a headless service. But I am still not convinced about the use cases, i.e. why we need it. There are already three types of service, ClusterIP, NodePort and LoadBalancer, with their separate use cases.
Could you please tell me exactly what problem a headless service solves that those other three service types cannot?
I have read that headless services are mainly used with stateful applications such as database pods, for example Cassandra, MongoDB, etc. But my question is: why?
A headless service doesn't provide any sort of proxy or load balancing -- it simply provides a mechanism by which clients can look up the ip address of pods. This means that when they connect to your service, they're connecting directly to the pods; there's no intervening proxy.
Consider a situation in which you have a service that matches three pods; e.g., I have this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
        - image: docker.io/traefik/whoami:latest
          name: whoami
          ports:
            - containerPort: 80
              name: http
If I'm using a typical ClusterIP type service, like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
  name: example
spec:
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: example
Then when I look up the service in a client pod, I will see the ip address of the service proxy:
/ # host example
example.default.svc.cluster.local has address 10.96.114.63
However, when using a headless service, like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: example
  name: example-headless
spec:
  clusterIP: None
  ports:
    - name: http
      port: 80
      targetPort: http
  selector:
    app: example
I will instead see the addresses of the pods:
/ # host example-headless
example-headless.default.svc.cluster.local has address 10.244.0.25
example-headless.default.svc.cluster.local has address 10.244.0.24
example-headless.default.svc.cluster.local has address 10.244.0.23
By removing the proxy from the equation, clients are aware of the actual pod ips, which may be important for some applications. This also simplifies the path between clients and the service, which may have performance benefits.
Kubernetes Services of type ClusterIP, NodePort and LoadBalancer have one thing in common: they load-balance between all pods that match the service's selector, so you can talk to all of those pods via a single virtual IP.
That's nice because you can have multiple pods of the same application, and all requests will be spread between them, avoiding overloading one pod while others are still idle.
For some applications you might need to talk to the pods directly instead of over a virtual IP. You'll still want a stable hostname that points to the same pod, even if the pod's IP changes, e.g. because it needs to be rescheduled on a different host.
The main use cases are databases, where applications should always connect to the same instance for data and session consistency.
A headless service does not provide a virtual IP covering all the endpoints in its endpoint slice. If you query that service's DNS, you will get all the endpoint IP addresses back. A regular service, on the other hand, has a sort of virtual IP, so clients connecting to it are not aware of the underlying endpoints.
To answer your question of why this is important for a StatefulSet: usually, members of a StatefulSet need to be aware of each other. Let's say you want to run a distributed database in something like a cluster formation. The members of this cluster need a way to discover each other. This is where the headless service comes into play, because the individual database pods can get the addresses of the other database pods by using the headless service.
The headless service also provides a stable network identity for each member of the StatefulSet. You can read about that in the StatefulSet documentation. This is, again, useful because the members of the StatefulSet need to coordinate with each other in some way. Let's say pod-0 will always take the initial leader role, and every other member knows it should report itself to pod-0 to form a cluster-like configuration.
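As a sketch of how the pieces fit together (names and image are illustrative, not from the question), a StatefulSet that references a headless service gives each replica a stable, per-pod DNS name:
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  clusterIP: None            # headless: no virtual IP
  selector:
    app: cassandra
  ports:
    - port: 9042
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra     # must match the headless service name
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:4.1
          ports:
            - containerPort: 9042
Each replica is then reachable at a stable name such as cassandra-0.cassandra.default.svc.cluster.local, so peers can hard-code cassandra-0 as the initial seed/leader even when pods are rescheduled and their IPs change.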

How to redirect traffic from .svc.cluster.local to .svc.k8s.my-domain.com in Kubernetes?

My cluster has its own domain name k8s.my-domain.com.
Originally, when I deployed Dgraph, I hit an issue where its pods could not talk to each other via dgraph-service.dgraph-namespace.svc.cluster.local.
If they talk to each other by
dgraph-service.dgraph-namespace
dgraph-service.dgraph-namespace.svc.k8s.my-domain.com
it will work.
I fixed it by removing the .svc.cluster.local part from the Dgraph YAML and created the pull request at https://github.com/dgraph-io/dgraph/pull/7976.
Today, when deploying Yugabyte, I hit the same issue again. I created a ticket at https://github.com/yugabyte/yugabyte-operator/issues/38 and hopefully the Yugabyte team can fix it.
However, I am not sure this approach is the right one anymore. I hope I can do something on my side instead.
Is there a way to redirect from .svc.cluster.local to .svc.k8s.my-domain.com?
Maybe in CoreDNS so that I don't need to change original Dgraph or Yugabyte YAML file? Thanks!
UPDATE 1:
Based on @CodeWizard's suggestion, and because I got a warning from my IDE, I tried both versions:
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: k8s.my-domain.com
and
apiVersion: v1
kind: Service
metadata:
  name: external-service
spec:
  type: ExternalName
  externalName: cluster.local
However, after applying these YAML files, I still got the same error when deploying Yugabyte to my cluster.
You can use ExternalName in your service.
kind: "Service"
apiVersion: "v1"
metadata:
name: "external-mysql-service"
spec:
type: ExternalName
externalName: example.domain.name
selector: {} # The selector field to leave blank.
Using an External Domain Name
Using external domain names makes it easier to manage an external service because you do not have to worry about the external service’s IP addresses changing.
You can use an ExternalName service to direct traffic to an external service.
Setting an external domain name tells the system that the DNS name in the externalName field (example.domain.name in the previous example) is the location of the resource that backs the service.
When a DNS request is made against the Kubernetes DNS server, it returns the externalName in a CNAME record telling the client to look up the returned name to get the IP address.
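For example (the output below is illustrative and will vary by cluster), a client pod looking up the ExternalName service above would see the CNAME being followed:
/ # host external-mysql-service
external-mysql-service.default.svc.cluster.local is an alias for example.domain.name.
example.domain.name has address 192.0.2.10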

Exposing LoadBalancer service in minikube at arbitrary port?

I have a minikube cluster with a running WordPress in one deployment, and MySQL in another. Both of the deployments have corresponding services. The definition for WordPress service looks like this:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
  type: LoadBalancer
The service works fine, and minikube service gives me a nice URL, made up of the minikube ip address and a random high port. The problem is that WordPress needs a full URL as the name of the site; I'd rather not change it every single time and would like a local DNS name for the cluster.
Is there a way to expose the LoadBalancer on an arbitrary port in minikube? I'll be fine with any port, as long as the port is decided by me and not by minikube itself.
Keep in mind that Minikube is unable to provide a real load balancer the way cloud providers do; it merely simulates one by using a simple NodePort Service instead.
You can still have full control over the port that is used. First of all, you can specify it manually in the NodePort Service specification (remember it should be within the default range: 30000-32767):
If you want a specific port number, you can specify a value in the nodePort field. The control plane will either allocate you that port or report that the API transaction failed. This means that you need to take care of possible port collisions yourself. You also have to use a valid port number, one that's inside the range configured for NodePort use.
Your example may look as follows:
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30000
  type: NodePort
You can also change this default range by providing your custom value after the --service-node-port-range flag when starting your kube-apiserver.
When you use a Kubernetes cluster set up by the kubeadm tool (Minikube also uses it as its default bootstrapper), you need to edit the /etc/kubernetes/manifests/kube-apiserver.yaml file and provide the required flag with your custom port range.
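A sketch of the relevant fragment of that static Pod manifest (the range shown is only an example; all other existing flags stay unchanged):
spec:
  containers:
    - command:
        - kube-apiserver
        - --service-node-port-range=20000-32767
        # ...existing flags...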

Ingress expose the service with the type clusterIP

Is it possible to expose the service by ingress with the type of ClusterIP?
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
    - name: my-service-port
      port: 4001
      targetPort: 4001
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: my.example.com
      http:
        paths:
          - path: /my-service
            backend:
              serviceName: my-service
              servicePort: 4001
I know the service can be exposed with type NodePort, but that may cost one more NAT hop. Could someone show me the fastest way to reach an internal service from the internet in the cloud?
No, clusterIP is only reachable from within the cluster. An Ingress is essentially just a set of layer 7 forwarding rules, it does not handle the layer 4 requirements of exposing the internals of your cluster to the outside world. At least 1 NAT step is required.
For Ingress to work, though, you need at least one Service involved that exposes your workload externally, so NodePort or LoadBalancer. Your ingress controller and the infrastructure of your cluster will determine which of the two service types you need to use.
In the case of the Nginx ingress controller, you need a single LoadBalancer Service which the ingress controller uses to bridge traffic from outside the cluster to inside it. After that, you can use ClusterIP services for each of your workloads.
In your example above, as long as the nginx ingress controller is correctly configured (with a LoadBalancer), the config you are using should work fine.
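To make that concrete, here is a minimal sketch of the single externally exposed Service for the controller (names and labels follow common ingress-nginx conventions and may differ in your installation); everything behind it stays ClusterIP:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: https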
In short: YES
Now to the elaborate answer...
First things first, let's have a look at what the official documentation says:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
[...]
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer...
What's confusing here is the term "load balancer". In the definition above, we are talking about the classic, well-known load balancer from the web world.
This one has nothing to do with Kubernetes!
So, back to the definition: to use an Ingress and make it work, we need a Kubernetes resource called an IngressController. And this resource happens to be a load balancer! That's it.
However, you have to keep in mind that there is a difference between a load balancer in the outside world and a Kubernetes service of type:LoadBalancer.
So in summary (and in order to redirect the traffic from the outside world to your k8s ClusterIP service):
Do you need a load balancer to make your kind:Ingress work? Yes, this is the IngressController Kubernetes resource.
Do you need a Kubernetes service of type:LoadBalancer or type:NodePort to make your kind:Ingress work? Definitely not! A service of type:ClusterIP works just fine!

Assign ExternalIP of LoadBalancer to Deployment as ENV variable

I have a very specific case where my Pod needs to access another LoadBalancer service via an ExternalIP.
Is there any way to assign the LoadBalancer's ExternalIP to the Deployment as an ENV variable in Deployment.yaml?
Thank you in advance!
I don't think this is directly possible in any of the standard templating tools. Part of the problem is that creating a cloud-hosted load balancer is an asynchronous operation, so that external-IP value won't be available until some time after kubectl apply (or the equivalent helm install) has finished.
If you can create the Service in advance then you can hard-code its external IP address or host name into other configuration, but this is intrinsically two steps. (If you're bought into Kubernetes operators, this should be possible with custom code: watch the Service, and once it gets its external address, create a corresponding ConfigMap that holds the address.)
Depending on your specific use case it may also work to just target the LoadBalancer Service within your cluster, the same as any other Service. This won't go out through the cloud provider's load-balancer tier, but it should be indistinguishable otherwise.
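A rough sketch of that two-step approach using plain kubectl (the service and ConfigMap names here are placeholders, not from the question):
# After the cloud load balancer has been provisioned, capture its address
EXTERNAL_IP=$(kubectl get svc my-lb-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Publish the address as a ConfigMap the Deployment can consume (e.g. via envFrom)
kubectl create configmap lb-address --from-literal=EXTERNAL_IP="$EXTERNAL_IP"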
I found a way to do it, but @David Maze was perfectly right: there is no straightforward way.
So, my solution was to add DNS records with public and private zones:
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
  labels:
    app.kubernetes.io/name: nginx-lb
  annotations:
    external-dns.alpha.kubernetes.io/hostname: mycoolservice.{{ .Values.dns_external_zone }}.
    external-dns.alpha.kubernetes.io/zone-type: public,private
    external-dns.alpha.kubernetes.io/ttl: "1"
spec:
  type: LoadBalancer
  ports:
    - name: https
      port: 443
      targetPort: https
    - name: http
      port: 80
      targetPort: http
  selector:
    app.kubernetes.io/name: nginx
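The Deployment can then refer to the stable DNS name instead of the raw IP, for example (a sketch; the image, names and hostname value are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:latest
          env:
            - name: NGINX_LB_HOST
              value: mycoolservice.example.com   # the hostname published by external-dns above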