How to use an API that is mapped to a Service in Kubernetes

I want to access my backend pods using an internal Kubernetes DNS name. Instead of using http://somepodip:8080/get, I want to use http://backend:8080/get to reach my backend.
I am currently running my backend pods and have hooked them up to a service.
kind: Service
apiVersion: v1
metadata:
  name: backend
spec:
  selector:
    app: myapp-backend
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
This does assign my pods to the backend service, but when I try to reach it from a frontend pod with http://backend/get, it does not find the resource.
Am I incorrectly configuring the service?

Your service seems to be OK. The issue is possibly that your frontend is not server-rendered, which means your browser is trying to look up the name backend; in that case you cannot rely on the Kubernetes service name, because your browser does not recognize it as a valid hostname.
If you want external access by name instead of IP, check how to set up an Ingress entry: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
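A minimal sketch of such an Ingress, assuming an ingress controller (e.g. ingress-nginx) is installed and using a hypothetical host name backend.example.com (neither is taken from the question):
# Hypothetical Ingress routing backend.example.com to the backend Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  rules:
    - host: backend.example.com   # placeholder host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 8080
With this in place, a browser can reach the backend via the public host name, while cluster-internal clients can still use http://backend:8080.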

Related

Configure DNS in a Kubernetes pod

Good afternoon. I have a problem with a multi-tenant architecture that I am setting up in Kubernetes. It consists of two pods (front and back); the front hits a URL that points to the backend, and in the back I have the different clients (tenants).
The nginx config of the back is defined in the following way:
server_name ~^(?<account>.+)\-backend.domain.com$;
root /var/www/html/tenant/$account-backend/;
index index.php;
This means that if I want to get to the backend from the frontend, it would be with a URL like this: tenant1.backend.domain.com
The frontend is exposed with a NodePort type service and a load balancer.
The backend is exposed locally with a ClusterIP type service, which is the following:
apiVersion: v1
kind: Service
metadata:
  name: service-clusterip-app-backend
  namespace: app
spec:
  type: ClusterIP
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  selector:
    app: app-nginx
When I bring up the cluster, go to the frontend and make a request, the pod can't resolve tenant1.backend.domain.com. I've tried configuring some redirection rules through CoreDNS, but I don't understand how it works:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  zoomcrm.server: |
    tenant1.backend.domain.com {
      forward . service-clusterip-app-backend:80
    }
Basically, what I need is for the frontend to know where to go when the URL of the request is tenant1.backend.domain.com. I've looked into it, but nothing I've done has worked.
First of all, you need to forward the DNS request to the required DNS server based on the forward rule, like this. That means you will forward resolving requests for the zone tenant1.backend.domain.com (*.tenant1.backend.domain.com) to that DNS server.
data:
  zoomcrm.server: |
    tenant1.backend.domain.com {
      forward . <customDNSserver>:<dns port>
    }
Description:
The forward plugin re-uses already opened sockets to the upstreams. It supports UDP, TCP and DNS-over-TLS, and uses in-band health checking.
The description of the CoreDNS forward plugin can be found here. If you are interested, you can check this post on building your own DNS server in Kubernetes for name resolution.
In your case, if you only need to add a single DNS record for resolution inside Kubernetes, you can either use hostAliases as in this document, or use the CoreDNS file plugin to add a zone to your CoreDNS for name resolution; here is the document.
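For the hostAliases approach, a minimal sketch, assuming a hypothetical frontend pod; the IP is a placeholder that you would replace with the ClusterIP of service-clusterip-app-backend:
# Hypothetical pod spec adding a static hosts entry for the tenant hostname.
# 10.0.0.10 is a placeholder; substitute the ClusterIP of
# service-clusterip-app-backend (kubectl get svc -n app).
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  namespace: app
spec:
  hostAliases:
    - ip: "10.0.0.10"
      hostnames:
        - "tenant1.backend.domain.com"
  containers:
    - name: frontend
      image: nginx
Note that hostAliases entries are static, so this only fits if the Service's ClusterIP does not change (or you re-render the manifest when it does).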

Fetch Request to a Docker container

In my front end (deployed as an AWS ECS service), I have a fetch request to an AWS Route 53 host name which is directed to a backend ECS service. Now I would like to deploy this infrastructure locally in a Kubernetes Minikube cluster. If the front-end pod and the back-end pod are connected together using a Kubernetes Service, should I replace that fetch's method argument with the DNS name of the back-end pod?
fetch(Route_53_Route)
to
fetch(DNS_name_of_backend_pod)
1- Creating the backend Service object:
The key to connecting a frontend to a backend is the backend Service.
A Service creates a persistent IP address and DNS name entry so that the backend microservice can always be reached.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello
    tier: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: http
2- Creating the frontend:
Now that you have your backend, you can create a frontend that connects to the backend.
The frontend connects to the backend worker Pods by using the DNS name given to the backend Service.
The DNS name is "hello", which is the value of the name field in the preceding Service configuration file.
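So for the Minikube setup in the question, the fetch target becomes the Service name rather than a pod address. One way to avoid hard-coding it is to pass the URL to the frontend as an environment variable; a minimal sketch, assuming a hypothetical frontend image and the backend Service named hello from above:
# Hypothetical frontend Deployment; BACKEND_URL resolves through cluster
# DNS to the Service named "hello".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: my-frontend:latest   # placeholder image
          env:
            - name: BACKEND_URL
              value: "http://hello:80"
Server-side code can then call fetch(process.env.BACKEND_URL) instead of the Route 53 name. Keep in mind that code running in the user's browser cannot resolve cluster DNS names; in that case the backend has to be exposed via an Ingress or NodePort.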

EKS Load Balancer IP Not Found

I'm trying to use a load balancer to expose a service I have running on an EKS pod. My service is defined in a yaml like this:
kind: Service
apiVersion: v1
metadata:
  name: mlflow-server
  namespace: default
  labels:
    app: mlflow-server
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app: mlflow-server
  ports:
    - name: http
      port: 88
      targetPort: http
    - name: https
      port: 443
      targetPort: https
This defines a service for a pod that has the mlflow server running on it. When I apply this and access the external IP generated for the service, I get a "This site can’t be reached" error in the browser. Is there something I'm missing with exposing my service as a load-balanced service to access the mlflow UI?
For a basic LoadBalancer type service you do not need the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb; that annotation is what creates a Network Load Balancer. Now, if you do need it to be an NLB, there might be the following problems:
The NLB takes a few minutes to come up after you apply the setting. If you check it just after you deploy it, it will not yet be able to accept traffic. Please check whether the intended Network Load Balancer is up in your AWS EC2 console > Load Balancers tab.
The second problem, which is more likely, is that an NLB can be attached to only certain instance types. To check that, go through the following link.
https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#register-deregister-targets
So if you do not actually need a Network Load Balancer, remove the annotation, as an NLB incurs a higher charge as well. But if it is a hard requirement, check the second point: whether the instances you are using on AWS are compatible with a Network Load Balancer.
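If you drop the annotation, a sketch of the same Service backed by the default (Classic) load balancer would look like this; it still assumes the mlflow pod defines named ports http and https, as the original manifest implies:
# Same Service without the NLB annotation; EKS provisions the default
# (Classic) load balancer instead.
kind: Service
apiVersion: v1
metadata:
  name: mlflow-server
  namespace: default
  labels:
    app: mlflow-server
spec:
  type: LoadBalancer
  selector:
    app: mlflow-server
  ports:
    - name: http
      port: 88
      targetPort: http
    - name: https
      port: 443
      targetPort: https
Note also that this Service listens on port 88 rather than 80, so a browser needs the port spelled out explicitly (e.g. http://<elb-dns-name>:88).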

Kubernetes service with external name curl

Well, I created a kubernetes-service.yaml file. Now I suppose that on port 8081 my backend service will be exposed under the domain my.backend.com. I would like to check whether it's accessible; however, I have it available only within the cluster. How do I do that? I don't want to expose the service externally, I just want to curl my.backend.com inside the cluster to check the results. Is there any workaround for that?
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  labels:
    app: backend
spec:
  type: ExternalName
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 8081
      targetPort: 8080
  externalName: my.backend.com
The service itself is only exposed within the cluster; however, the FQDN my.backend.com is not handled or controlled by the cluster. It is likely a publicly accessible URL, so you can curl it from anywhere. You'll have to configure your domain in a way that restricts who can access it.
The ExternalName service type points outside the cluster and really only provides a CNAME redirect from within your cluster to an external name. I'm not sure what you are trying to do, but it's not a change you make at the cluster level.
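For reference, an ExternalName Service is normally declared without a selector or ports, since all it does is return a CNAME; a minimal sketch:
# ExternalName only maps the in-cluster name backend-service to the
# external name my.backend.com; no selector or endpoints are involved.
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ExternalName
  externalName: my.backend.com
To check it from inside the cluster, you could start a throwaway pod and curl the service name, e.g. kubectl run tmp --rm -it --restart=Never --image=curlimages/curl -- http://backend-service:8081 (that image's entrypoint is curl, so the URL is passed straight to it); DNS will CNAME the service name to my.backend.com.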

Session Affinity Settings for multiple Pods exposed by a single service

I have set up MetalLB as the LB, with the Nginx Ingress controller installed, on a K8s cluster.
I have read about session affinity and its significance, but so far I do not have a clear picture.
How can I create a single service exposing multiple pods of the same application?
After creating the single service entry point, how to map the specific client IP to Pod abstracted by the service?
Is there any blog explaining this concept in terms of how the mapping between Client IP and POD is done in kubernetes?
But I do not see the client's IP in the YAML. How, then, is this service going to map traffic from the respective clients to its endpoints? This is the question I have.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000
The main concept of session affinity is to always redirect traffic from one client to the same specific pod. Please keep in mind that session affinity is a best-effort method, and there are scenarios where it will fail due to pod restarts or network errors.
There are two main types of Session Affinity:
1) Based on Client IP
This option works well for scenarios where there is only one client per IP. With this method you don't need an Ingress/proxy between the K8s services and the client.
The client IP should be static, because each time the client changes IP, it will be redirected to another pod.
To enable session affinity in Kubernetes, we can add the following to the service definition.
service.spec.sessionAffinity: ClientIP
Because the community provided a proper manifest for this method, I will not duplicate it.
2) Based on Cookies
It works when there are multiple clients behind the same IP, because the cookie is stored at the web browser level. This method requires an Ingress object. Steps to apply this method, with more detailed information, can be found here under the Session affinity based on Cookie section; a sketch of the resulting Ingress follows the list below.
Create NGINX controller deployment
Create NGINX service
Create Ingress
Redirect your public DNS name to the NGINX service public/external IP.
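A minimal sketch of the Ingress from step 3 with cookie-based affinity enabled, using the ingress-nginx affinity annotations (the host and backend names are placeholders):
# Hypothetical Ingress enabling cookie-based session affinity through
# ingress-nginx annotations; host and service names are examples.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
spec:
  rules:
    - host: my-app.example.com   # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
With this, the controller issues a route cookie on the first response and uses it to pin subsequent requests from that browser to the same pod.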
About mapping client IP and pod: according to the documentation, kube-proxy is responsible for session affinity. One of kube-proxy's jobs is writing to iptables (more details here), so that is how the mapping is done.
Articles which might help with understanding Session Affinity:
https://sookocheff.com/post/kubernetes/building-stateful-services/
https://medium.com/@diegomrtnzg/redirect-your-users-to-the-same-pod-by-using-session-affinity-on-kubernetes-baebf6a1733b
Follow the service reference for session affinity:
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10000