GCE ingress resource is taking too long to receive an IP address in a GKE cluster. What could be the reason?

I am trying to deploy a sample application on a GKE cluster. All the resources are created successfully except the Ingress resource, which takes around 15-20 minutes to receive an IP address. By that time the application has timed out and ends up in an errored state. The usual time to assign the IP address is 2-3 minutes. Can anyone help me figure out how to debug this?
This happens only in one specific cluster; the same Ingress gets an IP within 2 minutes in other GKE clusters.
Below are the manifest files I am using to deploy the app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-printer-deployment
spec:
  selector:
    matchLabels:
      app: zone-printer
  template:
    metadata:
      labels:
        app: zone-printer
    spec:
      containers:
      - name: zone-printer
        image: gcr.io/google-samples/zone-printer:0.2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: zone-printer-service
spec:
  type: ClusterIP
  selector:
    app: zone-printer
  ports:
  - port: 9000
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 8000
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 8000
      - path: /zone-printer
        pathType: ImplementationSpecific
        backend:
          service:
            name: zone-printer-service
            port:
              number: 9000
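For debugging, a minimal first pass could look like the commands below (a sketch; it assumes kubectl and gcloud access to the affected cluster and project, and <cluster-region> is a placeholder for your region). One cluster-specific difference worth checking for gce-internal is the proxy-only subnet: the internal HTTP(S) load balancer behind that ingress class needs one in the cluster's region, and a missing one can stall IP assignment.
# Events on the ingress show what the GKE ingress controller is doing (or stuck on)
kubectl describe ingress awesome-ingress
# Cluster events often surface quota, firewall, or subnet problems
kubectl get events --sort-by=.metadata.creationTimestamp
# Check that a proxy-only subnet exists in this cluster's region
gcloud compute networks subnets list --filter="purpose=REGIONAL_MANAGED_PROXY AND region:<cluster-region>"
Comparing the describe output between the slow cluster and a fast one is usually the quickest way to spot the difference.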

Related

Azure AKS Application Gateway 502 bad gateway

I have been following the tutorial here:
MS Azure
That works fine. However, when deploying a local config file I get a "502 Bad Gateway" error. This config has worked as expected before.
Can anyone see anything obvious with it? At this point I don't know where to start.
What I am trying to achieve is to use Application Gateway as the ingress controller, then add deployments and apply additional ingress rules:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 80
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 80
Output of kubectl describe ingress strata-2022:
Name:             strata-2022
Labels:           app=my-docker-apps
Namespace:        default
Address:          51.142.191.83
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path  Backends
  ----        ----  --------
  *
              /         one-api:80 (10.224.0.15:80,10.224.0.59:80,10.224.0.94:80)
              /two-api  two-api:80 (10.224.0.13:80,10.224.0.51:80,10.224.0.82:80)
Annotations:  kubernetes.io/ingress.class: azure/application-gateway
Events:       <none>
Commands used to create the AKS cluster using the Azure CLI:
az aks create -n myCluster -g david-tutorial --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
// Get credentials and switch to this context
az aks get-credentials -n myCluster -g david-tutorial
// This line is from the tutorial -- it works as expected
//kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
// This is what I ran; it works locally
kubectl apply -f new-deploy.yaml
// Get address
kubectl get ingress
kubectl get configmap
I tried recreating the same setup on my end, and right after running the same az aks create command I could identify the following issue: All the instances in one or more of your backend pools are unhealthy.
Since this appeared to indicate that the backend pools were unreachable, which was strange at first, I looked at the logs of one of the pods based on the hello-app image you were using and noticed this right away:
> kubectl logs one-api-77f9b4b9f-6sv6f
2022/08/12 00:22:04 Server listening on port 8080
Hence, my immediate thought was that in the Docker image you are using, nothing is listening on port 80, which is the port you use in your Kubernetes resource definitions.
After updating your Deployment and Service definitions to use port 8080 instead of 80, everything worked perfectly fine and I started getting the following response in my browser:
Hello, world!
Version: 1.0.0
Hostname: one-api-d486fbfd7-pm8kt
Below you can find the updated YAML file that I used to successfully deploy all the resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 8080
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 8080
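To confirm the fix end to end, something like the following should work (a sketch; the address 51.142.191.83 is the one from the kubectl describe ingress output above):
# The ingress should report the Application Gateway's address
kubectl get ingress strata-2022
# Both paths should now return the hello-app response
curl http://51.142.191.83/
curl http://51.142.191.83/two-api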

Accessing an application inside a Kubernetes pod from another application in a different pod

I have a Kubernetes cluster with two deployments, ui-service-app and user-service-app. Both deployments are exposed through ClusterIP services, namely ui-service-svc and user-service-svc. In addition, there is an Ingress for accessing both applications from outside the cluster.
Now I want to make an API call from my application inside ui-service-app to user-service-app. Currently I am using ingress-ip/user to do so, but there should be some way to do this internally?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-app
  labels:
    app: user-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service-app
  template:
    metadata:
      labels:
        app: user-service-app
    spec:
      containers:
      - name: user-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        livenessProbe:
          httpGet:
            path: /ping
            port: 3000
        readinessProbe:
          httpGet:
            path: /ping
            port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "user-service-svc"
  namespace: "default"
  labels:
    app: "user-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "user-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-service-app
  labels:
    app: ui-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui-service-app
  template:
    metadata:
      labels:
        app: ui-service-app
    spec:
      containers:
      - name: ui-service-app
        image: <MY-IMAGE-URL>
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "ui-service-svc"
  namespace: "default"
  labels:
    app: "ui-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "ui-service-app"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: ui-service-svc
      port:
        number: 80
  rules:
  - http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: ui-service-svc
            port:
              number: 80
      - path: /user(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: user-service-svc
            port:
              number: 80
Update 1: this is the error page I get when I change the URL in the React app to http://user-service-svc (screenshot omitted).
Use the service name of the associated Service.
From any other pod in the same namespace, the hostname user-service-svc will map to the Service you've defined, so http://user-service-svc will connect you to the web server of the user-service-app Deployment (no port needs to be specified, because your Service maps port 80 to container port 3000).
From another namespace, you can use the hostname <service>.<namespace>.svc.cluster.local, but that's not relevant to what you're doing here.
See the Service documentation for more details.
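As a quick way to verify this from inside the cluster (a sketch; it assumes the ui-service-app image ships a shell and curl, which may not hold for minimal images):
# Open a shell in the UI pod
kubectl exec -it deploy/ui-service-app -- sh
# Inside the pod: the Service name resolves via cluster DNS
curl http://user-service-svc/ping
# Fully qualified form, needed only across namespaces
curl http://user-service-svc.default.svc.cluster.local/ping
The /ping path is taken from the probes in the manifests above.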

Expose RabbitMQ management via Kubernetes ingress

I have a Kubernetes cluster with a RabbitMQ deployment. I want to expose the RabbitMQ management UI so that I can access it in my browser. To do that I have a deployment, a service, and an ingress file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rabbitmq
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rabbitmq
  template:
    metadata:
      labels:
        app: rabbitmq
    spec:
      containers:
      - image: rabbitmq:3.8.9-management
        name: rabbitmq
        ports:
        - containerPort: 5672
        - containerPort: 15672
        resources: {}
      restartPolicy: Always
The service:
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
spec:
  ports:
  - name: "5672"
    port: 5672
    targetPort: 5672
  - name: "15672"
    port: 15672
    targetPort: 15672
  selector:
    app: rabbitmq
The ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - http:
      paths:
      - path: /rabbitmq
        pathType: Prefix
        backend:
          service:
            name: rabbitmq
            port:
              number: 15672
When I type http://localhost/rabbitmq in my browser I get this error: {"error":"Object Not Found","reason":"Not Found"}
But when I exec into some other pod and type curl http://rabbitmq:15672, I get a response from the website.
I'm new to Kubernetes and haven't found any relevant solution to my problem. If someone could help me I would be very grateful!
Thanks for reading.
Try:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx # <-- assumed you only have 1 ingress-nginx
  rules:
  - http:
      paths:
      - path: /rabbitmq(/|$)(.*)
        ...
A request to http://localhost/rabbitmq will be seen by your rabbitmq service as /.
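For completeness, a full version of that Ingress might look like the following (the backend block is reconstructed from the question's own manifest, pointing at the rabbitmq Service on port 15672; pathType is set to ImplementationSpecific because the path is a regex):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # strip the /rabbitmq prefix so the management UI sees /
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /rabbitmq(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: rabbitmq
            port:
              number: 15672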

Why am I getting 502 errors on my ALB endpoints, targeted at EKS-hosted services

I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80 it does show me my working pods.
But when I hit the endpoints generated by the ALB, I get 502 errors.
When I look at the Registered Targets of the target group, I see the message: Health checks failed with these codes: [502]
Here is my complete YAML:
---
# Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: "example-game"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Example nginxdemo Deployment and Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
      - image: nginxdemos/hello
        imagePullPolicy: Always
        name: "example-nginxdemo"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    Alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
  - http:
      paths:
      - path: /game/*
        backend:
          serviceName: "service-example-game"
          servicePort: 80
      - path: /nginxdemo/*
        backend:
          serviceName: "service-example-nginxdemo"
          servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress', it just started working.
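For anyone hitting the same symptom, a few commands that narrow this kind of failure down (a sketch; the controller's deployment name and namespace vary by install, so alb-ingress-controller in kube-system is an assumption):
# Events on the ingress show target-group creation and registration errors
kubectl describe ingress example-ingress -n example-app
# The controller's logs name the exact target group and health check it built
kubectl logs -n kube-system deployment/alb-ingress-controller
# Empty ENDPOINTS here means the Service selector matches no pods
kubectl get endpoints -n example-app
Note that in the manifests above the Services select app: "example-app" while the pods are labeled app: "example-game" and app: "example-nginxdemo", which the endpoints check would surface.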

K8s ingress with 2 domains both listening on port 80

I am trying to replicate name-based virtual hosting with two docker images in one deployment. Unfortunately, I am only able to get one running, due to a port conflict:
2019/03/19 20:37:52 [ERR] Error starting server: listen tcp :5678: bind: address already in use
Is it really not possible to have two images listening on the same port as part of the same deployment? Or am I going wrong elsewhere?
Minimal example adapted from here
# set up ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
# set up load balancer
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
# spin up two containers in one deployment, same container port
kubectl apply -f test.yaml
test.yaml:
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo12
spec:
  selector:
    matchLabels:
      app: echo12
  replicas: 1
  template:
    metadata:
      labels:
        app: echo12
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80
Update:
If I add a separate deployment it works. Is this by design, or is there any way I can achieve this in one deployment? (Reason: I'd like to be able to reset all deployed domains at once.)
Problem 1: creating two different service backends in one pod of one deployment. This is not what pods are designed for. If you want to expose multiple services, you should have (at least) one pod backing each service. A Deployment wraps around the pod by letting you define replication and liveness options. In your case, you should have one Deployment (which creates one or more pods that will respond to echo requests) for each corresponding Service.
Problem 2: you are not linking your Services to your backends properly. The Services clearly try to select the labels app=echo1 or app=echo2, but in your Deployment the label is app=echo12. Consequently, the Services simply won't find any active endpoints.
To address the above problems, try the below:
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80
I have tested the above in my own cluster and verified that it is working (with different ingress URLs, of course). Hope this helps!
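To check the host-based routing without DNS, something like this should work (a sketch; <ingress-ip> stands for the external address of your ingress-nginx load balancer):
# Find the load balancer address handed to the ingress
kubectl get ingress echo-ingress
# Each Host header should reach its own backend
curl -H "Host: echo1.example.com" http://<ingress-ip>/
curl -H "Host: echo2.example.com" http://<ingress-ip>/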