Kubernetes ERR_NAME_NOT_RESOLVED

I have two deployments, one for the backend and one for the frontend, and a service for each. The frontend service is a LoadBalancer and is exposed as expected (using minikube tunnel). The backend service shouldn't be exposed outside the cluster, so I didn't set any service type (the default is ClusterIP, which is reachable only from within the cluster). Now I would like to make calls from the frontend to the backend. When I run
kubectl exec -it FRONT_END_POD_NAME -- /bin/sh
and then use curl, I can get all the resources I expect. However, when I open my website application, which fetches the same resource I requested with curl, the console shows the error net::ERR_NAME_NOT_RESOLVED. Do you have any idea why this happens even though I can curl it from my frontend pod and everything works? How can I fix it?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
  labels:
    app: backend
spec:
  selector:
    matchLabels:
      app: backend
  replicas: 1
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: ajris/site_backend:pr-kubernetes
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  labels:
    app: backend
spec:
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  labels:
    app: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: ajris/site_frontend:travis-66
        ports:
        - containerPort: 3000
        imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  labels:
    app: frontend
spec:
  ports:
  - protocol: TCP
    port: 3001
    targetPort: 3000
  selector:
    app: frontend
  type: LoadBalancer

Since this doesn't have an answer yet, it might as well be made easier for others to find in the future. The person who asked the question answered it themselves in the comments on the question.
Essentially what's happening here is that React doesn't execute on the frontend pod but in the client's browser. Any request that tries to contact the backend pod will therefore fail if the backend isn't reachable by the client, which is the case for a backend pod behind a ClusterIP service.
To solve this, the backend server needs to be accessible from the client. This can be done by changing the service associated with the backend pod to a LoadBalancer, or by using an Ingress. A minimal sketch of the LoadBalancer variant is below.
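For illustration, a sketch reusing the backend-service manifest from the question (with minikube, minikube tunnel would assign the external IP, as it already does for the frontend):

apiVersion: v1
kind: Service
metadata:
  name: backend-service
  labels:
    app: backend
spec:
  # LoadBalancer makes the service reachable from outside the cluster,
  # so the browser can resolve and contact it directly
  type: LoadBalancer
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 8081
    targetPort: 8080

The frontend would then call the service's external IP (or a DNS name pointing at it) instead of the cluster-internal name backend-service.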

Related

How does gateway connect other services in Kubernetes?

I am attaching an image of my application flow. The Gateway and the other services are built with NestJS, and every API request comes in through the Gateway.
The Gateway pod and the API pods communicate over TCP.
After deployment, the Gateway is not able to discover any API pods.
I am also attaching the YAML files for both the Gateway and the API pods.
Please let me know what mistake I am making in the YAML files.
[Application diagram]
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The gateway should be able to reach the services by their DNS names, for example pod1-srv from within the same namespace, or the fully qualified pod1-srv.roushan.svc.cluster.local. If this does not work, you may need to look at the Kubernetes DNS setup.
I have not used AKS; it may use a different cluster domain than cluster.local.
YAML Points
Ideally, you should use different selectors for the two deployments. You are currently using the same labels and selectors (app: roushan-app) for both the Gateway deployment and the application deployment.
A Service forwards traffic to pods based on selectors and labels, so this can send traffic meant for one service to the other's pods, for example a pod1-srv request ending up on a Gateway pod. A corrected sketch follows the networking notes below.
Networking
Your Gateway pods can reach an internal service by its service name alone, e.g. pod1-srv, if both are in the same namespace.
If the Gateway and the application are in different namespaces, they have to call each other as http://<servicename>.<namespace>.svc.cluster.local.
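A minimal sketch of the selector fix, using a hypothetical distinct label value (app: pod1 is illustrative, not from the original manifests):

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pod1  # distinct label, no longer shared with the gateway
  template:
    metadata:
      labels:
        app: pod1
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: pod1  # now selects only the API pods
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000

With distinct labels, gateway-svc and pod1-srv each select only their own pods, and the gateway can reach the API at http://pod1-srv (same namespace) or http://pod1-srv.roushan.svc.cluster.local.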

Can't make Vue Axios request from one pod to another pod's service, but I can curl from another pod to the service

I have a backend pod with a Spring Boot application and a frontend pod with a Vue application. I've been taking things step by step to learn more about Kubernetes. The goal right now is to make a GET request from the Vue app to my Spring Boot app. This works fine if I run both apps locally in their respective environments (npm run build, and just running the Spring Boot app).
I've then moved them into pod containers backed by services in Kubernetes. Both work on their own, but I can't get my Vue application to make the GET request with Axios. (And I have allowed all CORS origins, so that shouldn't be the problem, at least.)
I've been looking into other questions and reading a bunch of documentation. I am not sure if running Kubernetes locally can do this, or if I need to set up a domain name. I've seen a bunch of Ingress examples that make use of hostnames, and I've also looked into environment variables.
I don't have a domain; I'm running my Kubernetes cluster on Docker.
I have environment variables inside my Vue container, one that I've made myself and some that were made by Kubernetes itself and actually point to the local IP of my backend/Spring Boot pod (RESTSERVICE_PORT_8080_ADDR, RESTSERVICE_SERVICE_PORT_8080_8080).
It's my understanding that I should be able to keep the backend pod closed off from the outside. By referencing the service, I should be able to make the GET request. I can deploy a pod that sends a curl to the backend's service name, which works. But using the very same address in my Vue app does not work. Why is that? I've read it's something about the browser plus my Vue app needing access, but how do I set that up correctly?
So basically, I've looked into using environment variables, which I can't figure out how to make use of inside the Vue app in the container. And I've also tried just reaching out to the service's address, which works with a curl but not from the Vue Axios request.
I can also configure an Ingress, but this opens my backend service/pod up to the public as well, which I understand I should be able to refrain from doing. Like I mentioned, ideally I should be able to reach the backend pod's service without opening it up to the public, right?
Here is the pod + service configuration for my backend code. It's a YAML file that was auto-generated via some Spring guide, if I remember correctly, and I've just reused it since. It has some boilerplate, so sorry about that:
SpringBoot pod + service configuration YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: restservice
  name: restservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: restservice
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: restservice
    spec:
      containers:
      - name: restservice
        image: restservice
        imagePullPolicy: Never
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: restservice
  name: restservice
spec:
  ports:
  - name: 8080-8080
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: restservice
  type: ClusterIP
status:
  loadBalancer: {}
And here is the YAML configuration for my frontend application, which is basically the same configuration except that here the service type is LoadBalancer.
Vue pod + service configuration YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: vuejs
  name: vuejs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vuejs
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: vuejs
    spec:
      containers:
      - name: vuejs
        image: vuejs
        env:
        - name: RESTSERVICE_API_URL
          value: "restservice:8080"
        imagePullPolicy: Never
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: vuejs
  name: vuejs
spec:
  ports:
  - name: 8082-8082
    port: 8082
    protocol: TCP
    targetPort: 8082
  selector:
    app: vuejs
  type: LoadBalancer
status:
  loadBalancer: {}
My call via Axios in Vue is just a regular GET request. I can provide it if needed, but like I said, it worked when running the applications locally; it's just not working within the Kubernetes cluster.
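Regarding the Ingress concern above: an Ingress only exposes the routes you declare, so the backend can be published under a single API path rather than opened up wholesale. A minimal sketch, assuming the ingress-nginx controller and a hypothetical /api prefix (neither is from the original question), using the restservice and vuejs services defined above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    # assumes ingress-nginx; the rewrite strips the /api prefix
    # before the request reaches the Spring Boot app
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /api(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: restservice
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: vuejs
            port:
              number: 8082

The browser then calls http://<ingress-ip>/api/... and never needs the cluster-internal service name, while everything else on the backend stays unpublished.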

How to contact a random pod for one service (multiple instances of the same pod) on the same node in Kubernetes

I have one single node. On this node, I have 2 applications, each with multiple instances (3 pods each).
App A wants to contact App B.
My issue is: App A contacts the same pod of App B every time.
I would like to alternate pods (load balancing in round robin, for example):
first request: AppPod3 responds
second request: AppPod1 responds
third request: AppPod2 responds
How can I do that?
Thank you so much for your help.
You can see my configuration for App B below.
I have tried setting timeoutSeconds for sessionAffinity, but it's not working.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: AppB
spec:
  selector:
    matchLabels:
      app: AppB
  replicas: 3
  template:
    metadata:
      labels:
        app: AppB
    spec:
      containers:
      - name: AppB-container
        image: image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: AppB
  labels:
    app: svc-AppB
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: AppB
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 1
The Service should use the backend pods in round-robin fashion by default. You don't need the sessionAffinity settings if the pods are stateless; with sessionAffinity: ClientIP you will keep being redirected to the same pod based on the source IP.
Maybe you can add logging to the pods and observe when they are accessed. Subsequent calls to the service should be redirected to the pods in round-robin fashion with a minimal service configuration.
Update: this is the deployment I am using. It balances the pods as expected; each curl request sent to the service's clusterIP:port ends up on a different pod. My k8s installation is on premise, v1.18.3.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: baseDeployment
  labels:
    app: baseApp
  namespace: nm-app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: baseApp
  template:
    metadata:
      labels:
        app: baseApp
    spec:
      containers:
      - name: baseApp
        image: local-registry:5000/baseApp
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: baseService
  namespace: nm-app1
spec:
  selector:
    app: baseApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
@user3683760 You will have to apply a load-balancing strategy in your deployment. Kubernetes has a bunch of options for balancing traffic between your services. I would suggest playing around with a few patterns and seeing what best suits your needs.

Cannot access service in Postman

I created an image of my project in Node.js.
I wrote a Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-depl
  labels:
    app: user-depl
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-pod
      departemant: user
  template:
    metadata:
      labels:
        app: user-pod
        departemant: user
    spec:
      containers:
      - name: user-pod
        image: kia9372/store-user
        ports:
        - containerPort: 4000
and I created a Service:
apiVersion: v1
kind: Service
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  selector:
    app: user-pod
  ports:
  - name: user
    protocol: TCP
    port: 4000
    targetPort: 4000
When I run kubectl get services, it shows me this:
user-service ClusterIP 10.109.72.253 4000/TCP 36m
But when I send a request to this service in Postman, it doesn't return anything:
https://10.109.72.253:4000/user
It shows me this error:
Error: connect ETIMEDOUT 10.109.72.253:4000
What's the problem? How can I solve it?
Your service is not of type LoadBalancer, so you will not be able to access it from outside the cluster.
Since you did not specify any other type, ClusterIP is the default for the Service:
ClusterIP
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster.
Here is a sample lab on how to access a service using NodePort/ClusterIP (a NodePort sketch also follows below):
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/05-Services
[Service-types diagram; image source: https://medium.com/avmconsulting-blog/service-types-in-kubernetes-24a1587677d6]
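A minimal NodePort sketch, reusing the user-service manifest from the question (the nodePort value 30400 is an arbitrary pick from the default 30000-32767 range, not from the original):

apiVersion: v1
kind: Service
metadata:
  name: user-service
  labels:
    app: user-service
spec:
  # NodePort keeps the cluster-internal ClusterIP but additionally
  # exposes the service on <node-ip>:30400 for callers outside the cluster
  type: NodePort
  selector:
    app: user-pod
  ports:
  - name: user
    protocol: TCP
    port: 4000
    targetPort: 4000
    nodePort: 30400

Postman would then target http://<node-ip>:30400/user. Note also that the question used https:// against what is presumably a plain-HTTP Node.js server, which would fail even with a reachable address.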

Kubernetes Communication between services

For learning purposes, I have two services in a cluster on Google Cloud:
API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
        name: myapp-api
        ports:
        - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
  - port: 80
    targetPort: 3000
The frontend service is basically a Node.js app that only makes a REST call to the API service, like so: axios.get('http://myapp-api').
The issue is that the call is failing and is unable to find the requested endpoint. Is there any additional config that I'm currently missing for the API service to be discovered?
P.S. Both services are running, and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster.
Something to watch out for with frontends is where the HTTP call takes place. It could be in the server with Node, but given you're seeing this issue, I'd suggest that it's in the browser. (If you can see the call in the browser console, then it is.)
If the frontend's call is made in the browser, then it doesn't have access to the Kubernetes DNS, and the cluster-internal service name won't resolve. To handle this you could make the backend service a LoadBalancer and pass the external name into the frontend, as sketched below.
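A minimal sketch of that approach, reusing the manifests from the question (the MYAPP_API_URL variable name and the EXTERNAL_IP placeholder are illustrative assumptions, not from the original):

kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  # LoadBalancer provisions an external IP on GKE,
  # so the browser can reach the API directly
  type: LoadBalancer
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        env:
        - name: MYAPP_API_URL          # hypothetical variable read by the frontend
          value: "http://EXTERNAL_IP"  # fill in from: kubectl get svc myapp-api
        ports:
        - containerPort: 3000

The axios call would then target that URL instead of http://myapp-api; for a browser bundle the value has to be baked in at build time rather than read at runtime.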