Need n different endpoints for each replica in a StatefulSet - Kubernetes

I have a StatefulSet with n replicas of my web service. I want to expose all n replicas of the application externally through an Ingress.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: webapp
spec:
  selector:
    matchLabels:
      app: web
  serviceName: web-svc
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: web
        image: httpd:2.4
        ports:
        - containerPort: 80
          name: http
I am using the service below for that:
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  labels:
    app: web
spec:
  clusterIP: None
  ports:
  - port: 80
    name: web
  selector:
    app: web
The headless service above creates 3 different endpoints (one per pod), but how can these endpoints be exposed externally, through an Ingress or in any other way?
There is a workaround: create a separate Service per pod name, but that means manually creating a Service and a corresponding Ingress URL for every replica, as sketched below.
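For reference, the per-pod workaround relies on the statefulset.kubernetes.io/pod-name label that the StatefulSet controller adds to every pod. A minimal sketch for the first replica, assuming an ingress controller is installed and using a hypothetical host web.example.com:
apiVersion: v1
kind: Service
metadata:
  name: web-0
spec:
  # Selects exactly one pod of the StatefulSet via the pod-name
  # label set automatically by the StatefulSet controller.
  selector:
    statefulset.kubernetes.io/pod-name: webapp-0
  ports:
  - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-per-pod
spec:
  rules:
  - host: web.example.com   # hypothetical host
    http:
      paths:
      - path: /web-0        # one path per replica
        pathType: Prefix
        backend:
          service:
            name: web-0
            port:
              number: 80
Repeating this for web-1 and web-2 gives each replica its own stable URL; that repetition is exactly the manual effort the question is trying to avoid.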

Related

How does gateway connect other services in Kubernetes?

I am attaching an image of my application flow. The Gateway and the other services are built with NestJS; every API request comes in through the Gateway.
The Gateway pod and the API pods communicate using TCP.
After deployment, the Gateway is not able to discover any API pods.
I am also attaching the YAML files for both the Gateway and the API pods.
Please let me know what mistake I am making in the YAML files.
**APPLICATION DIAGRAM**
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The gateway should be able to reach the other services by their DNS names, for example pod1-srv.roushan.svc.cluster.local. If that does not work, you may need to look at the cluster's Kubernetes DNS setup.
I have not used AKS; it may use a different cluster domain than svc.cluster.local.
YAML Points
Ideally, you should keep distinct selectors across the deployments. You are currently using the same selector (app: roushan-app) for both the Gateway deployment and the application deployment.
A Service forwards traffic to pods based on selectors and labels, so with shared labels a request meant for one service can be routed to the wrong pod, e.g. a pod1-srv request landing on the Gateway pod. A corrected sketch is shown below.
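A minimal sketch of the fix, giving the API workload its own label (the value pod1-app is an assumption; any label distinct from the Gateway's works):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod1-app        # distinct from the Gateway's label
  template:
    metadata:
      labels:
        app: pod1-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: pod1-app          # now matches only the API pods
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000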
Networking
Your gateway pods can reach an internal service by its plain service name, e.g. pod1-srv, if both are in the same namespace.
If the gateway and the application are in different namespaces, they have to address each other as http://<servicename>.<namespace>.svc.cluster.local.
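As an illustration of wiring that address into the Gateway (the variable name API_BASE_URL and the app: gateway-app label are assumptions, not from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway-app     # distinct Gateway label, see above
  template:
    metadata:
      labels:
        app: gateway-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
        env:
        # Hypothetical variable; the NestJS gateway would read this
        # instead of hard-coding the API address.
        - name: API_BASE_URL
          value: http://pod1-srv.roushan.svc.cluster.local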

How to contact a random pod of one service (multiple instances of the same pod) on the same node in Kubernetes

I have a single node. On this node, I have 2 applications, each with multiple instances (3 pods each).
App A wants to contact App B.
My issue is that App A contacts the same pod of App B every time.
I would like the pods to alternate (load balancing in round robin, for example).
For example:
first request: AppPod3 responds
second request: AppPod1 responds
third request: AppPod2 responds
How can I do that?
Thank you so much for your help...
You can see the configuration of App B below.
I have tried to set timeoutSeconds for sessionAffinity, but it is not working...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-b
spec:
  selector:
    matchLabels:
      app: AppB
  replicas: 3
  template:
    metadata:
      labels:
        app: AppB
    spec:
      containers:
      - name: app-b-container
        image: image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-b
  labels:
    app: svc-AppB
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: AppB
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 1
The service should use the backend pods in round-robin fashion by default. You don't need the sessionAffinity settings if the pods are stateless; with ClientIP affinity you keep being redirected to the same pod based on the source IP.
Maybe you can add logging to the pods and observe when they are accessed. Subsequent calls to the service should be redirected to the pods in round-robin fashion with only a minimal service configuration.
Update: this is the deployment I am using. It balances across the pods as expected; each curl request sent to the service's clusterIP:port lands on a different pod. My k8s installation is on-premises, v1.18.3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: base-deployment
  labels:
    app: baseApp
  namespace: nm-app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: baseApp
  template:
    metadata:
      labels:
        app: baseApp
    spec:
      containers:
      - name: base-app
        image: local-registry:5000/baseapp
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: base-service
  namespace: nm-app1
spec:
  selector:
    app: baseApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
@user3683760 you will have to apply load-balancing strategies in your deployment. Kubernetes has a bunch of options for balancing traffic between your services; I would suggest playing around with a few patterns to see what best suits your needs.

Reuse Load Balancer for K8s Services

I have just set up my first K8s cluster in Oracle Cloud and have a website running in it.
Is there a way to use one LB instead of creating one for each K8s service?
Take a look at this code from the Oracle documentation.
Here we create an LB for this one service only. I would like to create one LB for all my K8s services so that I only have to set up TLS in one place. Can I point to an existing LB in the deployment file, or do I just create the service and then point the LB at the service?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx
This isn't possible: OKE will always create a new load balancer for each new Service of type LoadBalancer; you cannot point a Service at an existing LB.
Regards
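The usual way to end up with a single load balancer and one place for TLS is an ingress controller: only the controller's own Service is of type LoadBalancer, while application services (like my-nginx-svc above, switched to the default ClusterIP) sit behind Ingress rules. A hedged sketch, assuming an nginx ingress controller is installed and using hypothetical host and secret names:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx-ingress
spec:
  tls:
  - hosts:
    - www.example.com        # hypothetical host
    secretName: example-tls  # hypothetical TLS secret
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx-svc   # now a ClusterIP service
            port:
              number: 80
Additional services are then exposed by adding rules to this Ingress rather than by provisioning new load balancers.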

Kubernetes ERR_NAME_NOT_RESOLVED

I have two deployments, one for the backend and one for the frontend, and two services for them. The frontend service is of type LoadBalancer, and it is exposed as expected (using minikube tunnel). The backend service shouldn't be exposed outside of the cluster, so I didn't set any service type (the default is ClusterIP, which is reachable only from within the cluster). Now I would like to make calls from the frontend to the backend. When I type
kubectl exec -it FRONT_END_POD_NAME -- /bin/sh
and then use curl, I can fetch all the resources I expect. However, when I open my website application, which fetches the same resource I requested with curl, the console shows the error net::ERR_NAME_NOT_RESOLVED. Do you have any idea why this happens even though I can curl the backend from my frontend pod? How can I fix it?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: backend
spec:
  selector:
    matchLabels:
      app: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: ajris/site_backend:pr-kubernetes
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: ajris/site_frontend:travis-66
        ports:
        - containerPort: 3000
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    app: frontend
spec:
  ports:
  - protocol: TCP
    port: 3001
    targetPort: 3000
  selector:
    app: frontend
  type: LoadBalancer
Since this doesn't have an accepted answer yet, it might as well be made easier for others to find in the future: the person who asked the question answered it themselves in the comments.
Essentially what's happening here is that React doesn't execute on the frontend pod but in the client's browser. Thus any request that tries to contact the backend pod will fail if the backend is not reachable by the client, as is the case for a backend pod behind a ClusterIP service.
To solve this issue, the backend needs to be made accessible to the client. This can be done by changing the service associated with the backend pod to a LoadBalancer, or by using an ingress.
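For example, a minimal sketch of the LoadBalancer variant of the backend service from the question (ports taken from the original manifest):
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  labels:
    app: backend
spec:
  type: LoadBalancer   # makes the backend reachable from the browser
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080
The frontend code then has to call the externally visible address of this service (with minikube, the address provided by minikube tunnel) rather than the cluster-internal name backend-service.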

Kubernetes Communication between services

For learning purposes, I have two services in a cluster on Google Cloud:
an API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
        name: myapp-api
        ports:
        - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
  - port: 80
    targetPort: 3000
The front service is basically a Node.js app that only makes a REST call to the API service, like axios.get('http://myapp-api').
The issue is that the call fails and the app is unable to find the requested endpoint. Is there any additional configuration I am missing for the API service to be discovered?
P.S. Both services are running, and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster.
Something to watch out for with frontends is where the HTTP call takes place. It could be on the server with Node, but given that you're seeing this issue I'd suggest it's in the browser (if you can see the call in the browser console, then it is). If the frontend's call is made in the browser, then it doesn't have access to the Kubernetes DNS, and the cluster-internal service name won't resolve.
To handle this you could make the backend service a LoadBalancer and pass the external name into the frontend.
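A hedged sketch of that suggestion, based on the manifests from the question; the API_URL variable name is an assumption, and its value has to be filled in with the load balancer address once it is provisioned:
# Expose the API externally so the browser can reach it.
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  type: LoadBalancer
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
# Pass the external address to the frontend instead of the
# cluster-internal name (apps/v1 shown, which requires a selector).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myappfront
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
        env:
        - name: API_URL                   # hypothetical variable name
          value: http://<external-lb-ip>  # fill in once provisioned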