Kubernetes AWS EKS Failed to load resource: net::ERR_NAME_NOT_RESOLVED

I have an issue with the following AWS EKS deployment, where the front end always gets a Failed to load resource: net::ERR_NAME_NOT_RESOLVED error from the backend:
Failed to load resource: net::ERR_NAME_NOT_RESOLVED
The reason appears to be that the frontend app, which runs in the browser (outside the cluster), has no way to reach the backend API at http://restapi-auth-nginx/api/, since that name only resolves inside Kubernetes
(see attached browser image)
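For context, restapi-auth-nginx is a cluster-internal Service name that only the cluster DNS can resolve. A quick sketch of how to confirm this from a machine outside the cluster (the lookup is expected to fail there, which is exactly what the browser runs into):
nslookup restapi-auth-nginx           # outside the cluster: name not found
curl http://restapi-auth-nginx/api/   # outside the cluster: "Could not resolve host"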
Here are the details of the configuration
- file: restapi-auth-api.yaml
  Description: Backend API served by Gunicorn
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/bash
  Listens on port 5000
- file: restapi-auth-nginx.yaml
  Description: NGINX reverse proxy for the API
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/bash, and I can also reach the API pod from the NGINX pod, so this part is working fine
- file: frontend.yaml
  Description: NGINX plus the Angular app in a multi-stage build
  Details: Correctly downloads the image and creates the pods
  I can do kubectl exec -it <podId> /bin/ash
  I can also reach the API pod from the frontend pod, so this part is working fine
However, from the browser I still get the above error, even though all the components appear to be working fine
(see the image of the web site working from the browser)
Let me show you how I can access the API through its NGINX reverse proxy from the frontend pod. Here are the pods:
kubectl get pods
NAME                                  READY   STATUS    RESTARTS   AGE
frontend-7674f4d9bf-jbd2q             1/1     Running   0          35m
restapi-auth-857f94b669-j8m7t         1/1     Running   0          39m
restapi-auth-nginx-5d885c7b69-xt6hf   1/1     Running   0          38m
udagram-frontend-5fbc78956c-nvl8d     1/1     Running   0          41m
Now let's log into the frontend pod and curl the NGINX proxy that serves the API.
This is the curl request we will send directly from the frontend pod to the NGINX backend:
curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "email":"david#me.com",
> "password":"SuperPass"
> }'
Now let's log into the frontend pod and run it:
kubectl exec -it frontend-7674f4d9bf-jbd2q /bin/ash
/usr/share/nginx/html # curl --location --request POST 'http://restapi-auth-nginx/api/users/auth/login' \
> --header 'Content-Type: application/json' \
> --data-raw '{
> "email":"david#me.com",
> "password":"SuperPass"
> }'
{
"auth": true,
"token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ1c2VyIjoiZGF2aWRAcHlta455aS5vcmciLCJleHAiOjE1OTYwNTk7896.OIkuwLsyLhrlCMTVlccg8524OUMnkJ2qJ5fkj-7J5W0",
"user": "david#me.com"
}
It works perfectly, meaning that the frontend pod is correctly communicating with the restapi-auth-nginx API reverse proxy.
Here in this image, you have the output of multiple commands
Here are the .yaml files
LOAD BALANCER and FRONT END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  labels:
    app: udagram
spec:
  replicas: 1
  selector:
    matchLabels:
      app: udagram
      tier: frontend
  template:
    metadata:
      labels:
        app: udagram
        tier: frontend
    spec:
      containers:
        - name: udagram-frontend
          image: pythonss/frontend_udacity_app
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
          imagePullPolicy: Always
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: frontend-lb
  labels:
    app: udagram
    tier: frontend
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: udagram
    tier: frontend
Nginx reverse proxy for API backend
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: restapi-auth-nginx
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.21.0 ()
      creationTimestamp: null
      labels:
        io.kompose.service: restapi-auth-nginx
    spec:
      containers:
        - image: pythonss/restapi_auth_microservice_nginx
          imagePullPolicy: Always
          name: restapi-auth-nginx-nginx
          ports:
            - containerPort: 80
          resources: {}
      imagePullSecrets:
        - name: regcred
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.21.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: restapi-auth-nginx
  name: restapi-auth-nginx
spec:
  ports:
    - name: "80"
      port: 80
      targetPort: 80
  selector:
    io.kompose.service: restapi-auth-nginx
status:
  loadBalancer: {}
For brevity, I will not share the API app server .yaml file.
So my questions are:
How could I grant access from the internet to the backend API gateway without exposing the API to the whole world?
Or should I expose the API through a LoadBalancer, like so:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
This would solve the issue, as it exposes the API.
However, I would then need to add the frontend LB origin to the API's CORS configuration, and the backend LB hostname to the frontend so it can make the calls.
Could someone explain how to do this differently, without exposing the API?
What are the common patterns for this architecture?
BR

The solution to this is to simply expose the NGINX pods (note that the docker-compose setup creates two images) through a LoadBalancer service.
We need this additional YAML:
apiVersion: v1
kind: Service
metadata:
  name: backend-lb
  labels:
    io.kompose.service: restapi-auth-nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    io.kompose.service: restapi-auth-nginx
Now we have 3 services and 2 deployments. This latest service exposes the API to the world.
Below is an image for this deployment.
As you can see, the world has access to this latest service only.
The NGINX and Gunicorn pods are unreachable from the internet.
Now your frontend application can access the API through the exposed LB, which routes the traffic to the NGINX pods inside Kubernetes.
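Once the service is applied, the external hostname of the new AWS load balancer can be read from kubectl and used as the API base URL in the frontend (the file name below is an assumption; adjust it to your setup):
kubectl apply -f backend-lb.yaml   # file name is an assumption
kubectl get svc backend-lb         # the EXTERNAL-IP column shows the AWS ELB DNS name
Point the Angular app's API base URL (and the API's CORS allowed origins) at that hostname instead of at http://restapi-auth-nginx.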

Related

RabbitMQ on Kubernetes - Can't login to the management UI

I'm trying to switch an existing app from docker-compose to Kubernetes (first time using it).
My app is deployed on AWS EKS using Fargate nodes. It runs well, but I would like to access the RabbitMQ management UI for debugging purposes.
The RabbitMQ deployment/service files I am using are the following:
# rabbit-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: rabbit
  strategy: {}
  template:
    metadata:
      annotations:
        kompose.cmd: kompose convert
        kompose.version: 1.26.0 (HEAD)
      creationTimestamp: null
      labels:
        io.kompose.service: rabbit
    spec:
      containers:
        - image: rabbitmq:3.9.13-management
          name: rabbit
          ports:
            - containerPort: 15672
            - containerPort: 5672
            - containerPort: 8080
          resources: {}
          env:
            - name: RABBITMQ_DEFAULT_USER
              value: "guest"
            - name: RABBITMQ_DEFAULT_PASS
              value: "guest"
      restartPolicy: Always
status: {}
and
# rabbit-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.26.0 (HEAD)
  creationTimestamp: null
  labels:
    io.kompose.service: rabbit
  name: rabbit
spec:
  type: NodePort
  ports:
    - name: "15672"
      port: 15672
      targetPort: 15672
    - name: "5672"
      port: 5672
      targetPort: 5672
    - name: "8080"
      port: 8080
      targetPort: 8080
  selector:
    io.kompose.service: rabbit
status:
  loadBalancer: {}
I also followed the instructions to create a new user:
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl add_user test test
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_user_tags test administrator
kubectl exec $(kubectl get pods --selector=io.kompose.service=rabbit -o template --template="{{(index .items 0).metadata.name}}") -- rabbitmqctl set_permissions -p / test ".*" ".*" ".*"
I can access the webUI on
http://localhost:8001/api/v1/namespaces/default/services/rabbit:15672/proxy/
after starting the proxy with kubectl proxy. However, logging in with test / test still gives me a Login failed message.
Posting the answer out of the comments.
First, what kubectl proxy is:
Creates a proxy server or application-level gateway between localhost
and the Kubernetes API server. It also allows serving static content
over specified HTTP path. All incoming data enters through one port
and gets forwarded to the remote Kubernetes API server port, except
for the path matching the static content path.
Also, kubectl proxy works with HTTP requests; it does not work with raw TCP traffic (this is probably the reason why it did not work).
You can read more in a good answer - kubectl proxy vs kubectl port-forward
Common options to access a service inside the cluster are:
- use kubectl port-forward - for local development and testing purposes (see the sketch below)
- use a LoadBalancer or NodePort service type - more advanced options which can be used across clusters and in production environments. Find more about service types.
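A minimal port-forward sketch for this setup (the service name rabbit and port 15672 are taken from the YAML above):
kubectl port-forward svc/rabbit 15672:15672
Then open http://localhost:15672 in the browser and log in with the user created via rabbitmqctl.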

Kubernetes MetalLB External IP not reachable from browser

I have an nginx deployment with a service of type LoadBalancer.
I got an external IP which is accessible from the master and worker nodes.
I am not able to access it from a browser.
What am I missing?
You can follow the steps below to access it from the browser.
Deploy NGINX in your Kubernetes environment by applying the YAML file below:
kubectl create -f {YAML file location}
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
Apply the nginx-service YAML below to access it from the browser:
kubectl create -f {YAML file location}
#Service
#nginx-svc-np.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 192.168.1.155
Now you can access Nginx from your browser.
http://192.168.1.155/ (Please use your external IP)
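If the page still does not load, a quick sanity check is to confirm the service has the expected external IP and non-empty endpoints (names taken from the YAML above):
kubectl get svc nginx-service        # EXTERNAL-IP should show the address configured above
kubectl get endpoints nginx-service  # should list the nginx pod IPs; an empty list means the selector does not match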
I have had the same issue, but I am running minikube, so changing the minikube driver helped me.

Can't connect with my frontend with kubectl

With Kubernetes, I created an Ingress with a service like this:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: syntaxmap2
spec:
  backend:
    serviceName: testsvc
    servicePort: 3000
The service testsvc is already created.
I created a frontend service like this:
apiVersion: v1
kind: Service
metadata:
  name: syntaxmapfrontend
spec:
  selector:
    app: syntaxmap
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 7000
      targetPort: 7000
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: syntaxmapfrontend
spec:
  selector:
    matchLabels:
      app: syntaxmap
      tier: frontend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: syntaxmap
        tier: frontend
        track: stable
    spec:
      containers:
        - name: nginx
          image: "gcr.io/google-samples/hello-frontend:1.0"
          lifecycle:
            preStop:
              exec:
                command: ["/usr/sbin/nginx","-s","quit"]
When I run this command:
kubectl describe ingress syntaxmap2
I get an IP address that I can put in my browser, and I get an answer.
But when I run this command:
kubectl describe service syntaxmapfrontend
I get an IP address with a port, but when I try to connect to it with curl, I get a timeout.
How can I connect to my Kubernetes frontend with curl?
The service is accessible only from within the k8s cluster. You either need to change the service type from ClusterIP to NodePort, or use something like kubectl port-forward or kubefwd.
If you need more detailed advice, you'll need to post the output of those commands, or even better, show us how you created the objects.
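A quick way to test from your machine, assuming the service name and port from the YAML above:
kubectl port-forward svc/syntaxmapfrontend 7000:7000
curl http://localhost:7000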
I have found a way.
I run:
minikube service syntaxmapfrontend
and it opens a browser with the right URL.

Kubernetes's LoadBalancer yaml not working even though CLI `expose` function works

This is my Service and Deployment yaml that I am running on minikube:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-hello-world
  labels:
    app: node-hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-hello-world
  template:
    metadata:
      labels:
        app: node-hello-world
    spec:
      containers:
        - name: node-hello-world
          image: node-hello-world:latest
          imagePullPolicy: Never
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: node-hello-world-load-balancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 8080
      nodePort: 30002
  selector:
    name: node-hello-world
Results:
$ minikube service node-hello-world-load-balancer --url
http://192.168.99.101:30002
$ curl http://192.168.99.101:30002
curl: (7) Failed to connect to 192.168.99.101 port 30002: Connection refused
However, running the following CLI worked:
$ kubectl expose deployment node-hello-world --type=LoadBalancer
$ minikube service node-hello-world --url
http://192.168.99.101:30130
$ curl http://192.168.99.101:30130
Hello World!
What am I doing wrong with my LoadBalancer yaml config?
You have configured the service selector incorrectly:
selector:
  name: node-hello-world
It should be:
selector:
  app: node-hello-world
https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/
You can debug this by describing the service and seeing that its endpoint list is empty, which means no pods are mapped to the service:
kubectl describe svc node-hello-world-load-balancer | grep -i endpoints
Endpoints: <none>
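After fixing the selector and re-applying the manifest, the endpoint list should no longer be empty (a quick check; the pod IP shown will of course differ):
kubectl apply -f <your-yaml-file>
kubectl get endpoints node-hello-world-load-balancer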

Routing troubleshooting with Kubernetes Ingress

I tried to set up a GKE environment with a frontend pod (cup-fe) and a backend one used to authenticate the user upon login (cup-auth), but I can't get my Ingress to work.
The following is the frontend deployment (cup-fe) running NGINX with an Angular app. I also created a static IP address that the "cup.xxx.it" and "cup-auth.xxx.it" DNS names resolve to:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-fe
  namespace: default
  labels:
    app: cup-fe
spec:
  replicas: 2
  selector:
    matchLabels:
      app: "cup-fe"
  template:
    metadata:
      labels:
        app: "cup-fe"
    spec:
      containers:
        - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-fe:latest"
          name: "cup-fe"
      dnsPolicy: ClusterFirst
Next is the auth pod (cup-auth):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cup-auth
  namespace: default
  labels:
    app: cup-auth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cup-auth
  template:
    metadata:
      labels:
        app: cup-auth
    spec:
      containers:
        - image: "eu.gcr.io/xxx-cup-yyyyyy/cup-auth:latest"
          imagePullPolicy: Always
          name: cup-auth
          ports:
            - containerPort: 8080
              protocol: TCP
            - containerPort: 8443
              protocol: TCP
            - containerPort: 8778
              name: jolokia
              protocol: TCP
            - containerPort: 8888
              name: management
              protocol: TCP
      dnsPolicy: ClusterFirst
Then I created two NodePorts to expose the above pods:
kubectl expose deployment cup-fe --type=NodePort --port=80
kubectl expose deployment cup-auth --type=NodePort --port=8080
Last, I created an Ingress to route external HTTP requests to the services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http-ingress
  namespace: default
  labels:
    app: http-ingress
spec:
  rules:
    - host: cup.xxx.it
      http:
        paths:
          - path: /*
            backend:
              serviceName: cup-fe
              servicePort: 80
    - host: cup-auth.xxx.it
      http:
        paths:
          - path: /*
            backend:
              serviceName: cup-auth
So I can reach the frontend pod at http://cup.xxx.it, and the Angular app redirects me to http://cup-auth.xxx.it/login, but I only get a 502 Bad Gateway. With the kubectl describe ingress command, I can see an unhealthy backend for the cup-auth service.
Here is a successful request (against the cup-fe host):
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup.xxx.it/login
Connecting to cup.xxx.it
login 100% |********************************| 1646 0:00:00 ETA
And here is the failing one (against the cup-auth host):
$ kubectl exec -it cup-fe-7f979bb747-6lqfx wget cup-auth.xxx.it/login
Connecting to cup-auth.xxx.it
wget: server returned error: HTTP/1.1 502 Bad Gateway
command terminated with exit code 1
I tried to replicate your setup as closely as I could, but did not have any issues.
I can call cup-auth.testdomain.internal/login normally from within and outside the pods.
Usually, 502 errors occur when a request received by the LB cannot be forwarded to a backend. Since you mention that you are seeing an unhealthy backend, this is likely the reason.
This could be due to a misconfigured health check or a problem with your application.
First I would look at the logs to see why the request is failing, and rule out an issue with the health checks or with the application itself.
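If the health check turns out to be the problem, one common fix on GKE is to give the cup-auth container a readiness probe, since the ingress-created health check is derived from it. A sketch, to be added under the cup-auth container spec (the /login path and port 8080 are assumptions based on the deployment above):
          readinessProbe:
            httpGet:
              path: /login    # assumed health-check path; use whatever endpoint returns 200
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 10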