K8s ingress with 2 domains both listening on port 80

I am trying to replicate name-based virtual hosting with two Docker images in one deployment. Unfortunately, I am only able to get one of them running, due to a port conflict:
2019/03/19 20:37:52 [ERR] Error starting server: listen tcp :5678: bind: address already in use
Is it really not possible to have two images listening on the same port as part of the same deployment? Or am I going wrong elsewhere?
Minimal example adapted from here
# set up ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
# set up load balancer
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
# spin up two containers in one deployment, same container port
kubectl apply -f test.yaml
test.yaml:
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo12
spec:
  selector:
    matchLabels:
      app: echo12
  replicas: 1
  template:
    metadata:
      labels:
        app: echo12
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80
Update:
If I add a separate deployment it works. Is this by design or is there any way that I can achieve this in one deployment (reason: I'd like to be able to reset all deployed domains at once)?

Problem 1: You are trying to run two different service backends in one pod of one deployment. That is not what pods are designed for. If you want to expose multiple services, each service should be backed by (at least) its own pod. A Deployment wraps around the pod, adding replication and liveness options. In your case, each service needs its own Deployment, which creates the pod(s) that answer that service's echo requests.
Problem 2: You are not linking your services to your backends properly. The services select the labels app=echo1 and app=echo2, but the pods in your deployment are labelled app=echo12. Consequently, neither service will ever find any active endpoints.
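A quick way to confirm a selector mismatch like this is to check whether the Services have any endpoints at all; an empty list means no running pod matches the selector. For example, with the service names from your manifest (just standard kubectl, nothing specific to this setup):
kubectl get endpoints echo1 echo2
kubectl describe service echo1   # the Endpoints field should list pod IPs once the labels match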
To address the above problems, try this below:
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80
I have tested the above in my own cluster and verified that it is working (with different ingress URLs, of course). Hope this helps!
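As for resetting all deployed domains at once: even with separate Deployments you can operate on them as a group by adding an extra shared label to each of them and then targeting that label with kubectl. A minimal sketch, assuming a label of group: echo (the key/value is an arbitrary choice, nothing Kubernetes-specific):
metadata:
  name: echo1
  labels:
    group: echo   # add the same label to the echo2 Deployment (and the Services, if desired)
With that in place the whole group can be listed or deleted in one go:
kubectl get deployments -l group=echo
kubectl delete deployments,services -l group=echo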

Related

Azure AKS Application Gateway 502 bad gateway

I have been following the tutorial here:
MS Azure
That part works fine. However, when deploying my local config file I get a "502 Bad Gateway" error, even though this config has previously worked as expected.
Can anyone see anything obvious with it? At this point I don't know where to start.
What I am trying to achieve: use the Application Gateway ingress controller, then add deployments and apply additional ingress rules.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 80
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 80
Output of kubectl describe ingress strata-2022:
Name:             strata-2022
Labels:           app=my-docker-apps
Namespace:        default
Address:          51.142.191.83
Ingress Class:    <none>
Default backend:  <default>
Rules:
  Host        Path       Backends
  ----        ----       --------
  *
              /          one-api:80 (10.224.0.15:80,10.224.0.59:80,10.224.0.94:80)
              /two-api   two-api:80 (10.224.0.13:80,10.224.0.51:80,10.224.0.82:80)
Annotations:      kubernetes.io/ingress.class: azure/application-gateway
Events:           <none>
Commands used to create the AKS cluster with the Azure CLI:
az aks create -n myCluster -g david-tutorial --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name testApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
# Get credentials and switch to this context
az aks get-credentials -n myCluster -g david-tutorial
# This line is from the tutorial -- this works as expected
#kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
# This is what I ran. It works locally
kubectl apply -f new-deploy.yaml
# Get address
kubectl get ingress
kubectl get configmap
I tried recreating the same setup on my end, and right after running the same az aks create command I could identify the following issue: "All the instances in one or more of your backend pools are unhealthy."
Since this indicated that the backend pools were unreachable, which seemed strange at first, I looked at the logs of one of the pods based on the hello-app image you are using and noticed this right away:
> kubectl logs one-api-77f9b4b9f-6sv6f
2022/08/12 00:22:04 Server listening on port 8080
Hence, my immediate thought was that nothing in the Docker image you are using is configured to listen on port 80, which is the port you are using in your Kubernetes resource definitions.
After updating your Deployment and Service definitions to use port 8080 instead of 80, everything worked perfectly and I started getting the following response in my browser:
Hello, world!
Version: 1.0.0
Hostname: one-api-d486fbfd7-pm8kt
Below you can find the updated YAML file that I used to successfully deploy all the resources:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: one-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: one-api
  template:
    metadata:
      labels:
        run: one-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: one-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: one-api
  type: NodePort
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: two-api
  namespace: default
  annotations:
    imageregistry: "gcr.io/google-samples/hello-app:1.0"
spec:
  replicas: 3
  selector:
    matchLabels:
      run: two-api
  template:
    metadata:
      labels:
        run: two-api
    spec:
      containers:
      - image: gcr.io/google-samples/hello-app:1.0
        imagePullPolicy: IfNotPresent
        name: two-api
        ports:
        - containerPort: 8080
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: two-api
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    run: two-api
  type: NodePort
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: strata-2022
  labels:
    app: my-docker-apps
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: one-api
            port:
              number: 8080
      - path: /two-api
        pathType: Prefix
        backend:
          service:
            name: two-api
            port:
              number: 8080
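As a side note, if you would rather keep exposing the services on port 80, it should also work to leave the Service port (and the Ingress backend port number) at 80 and only point targetPort at 8080, roughly like this for one-api (a sketch based on the manifests above, not something I re-tested):
apiVersion: v1
kind: Service
metadata:
  name: one-api
  namespace: default
spec:
  type: NodePort
  selector:
    run: one-api
  ports:
  - port: 80          # port the Ingress backend references
    targetPort: 8080  # port the hello-app container actually listens on
    protocol: TCP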

GCE ingress resource is taking too long to receive an IP address in GKE cluster. What could be the reason?

I am trying to deploy a sample application on a GKE cluster. All the resources are created successfully except the Ingress resource, which takes around 15-20 minutes to receive an IP address. By then the application has timed out and ends up in an errored state. The expected time for the IP to be assigned is 2-3 minutes. Can anyone help me debug this?
This happens only on one specific cluster; the same Ingress gets its IP within 2 minutes on other GKE clusters.
Below are the manifest files I am using to deploy the app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-printer-deployment
spec:
  selector:
    matchLabels:
      app: zone-printer
  template:
    metadata:
      labels:
        app: zone-printer
    spec:
      containers:
      - name: zone-printer
        image: gcr.io/google-samples/zone-printer:0.2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: zone-printer-service
spec:
  type: ClusterIP
  selector:
    app: zone-printer
  ports:
  - port: 9000
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 8000
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 8000
      - path: /zone-printer
        pathType: ImplementationSpecific
        backend:
          service:
            name: zone-printer-service
            port:
              number: 9000

Expose services via Istio ingress gateway

I am new to Istio and I want to expose three services and route traffic to them based either on the port number ("website.com:port") or on the subdomain.
Service and deployment config files:
apiVersion: v1
kind: Service
metadata:
  name: visitor-service
  labels:
    app: visitor-service
spec:
  ports:
  - port: 8000
    nodePort: 30800
    targetPort: 8000
  selector:
    app: visitor-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visitor-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visitor-service
  template:
    metadata:
      labels:
        app: visitor-service
    spec:
      containers:
      - name: visitor-service
        image: visitor-service
        ports:
        - containerPort: 8000
second service:
apiVersion: v1
kind: Service
metadata:
  name: auth-service
  labels:
    app: auth-service
spec:
  ports:
  - port: 3004
    nodePort: 30304
    targetPort: 3004
  selector:
    app: auth-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
      - name: auth-service
        image: auth-service
        ports:
        - containerPort: 3004
Third one:
apiVersion: v1
kind: Service
metadata:
  name: gateway
  labels:
    app: gateway
spec:
  ports:
  - port: 8080
    nodePort: 30808
    targetPort: 8080
  selector:
    app: gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: gateway
        ports:
        - containerPort: 8080
If someone could help with setting up the gateway and virtual service configuration, that would be great.
It seems like you simply want to expose your applications; for that, Istio is total overkill, since it comes with a lot of overhead that you won't be using.
Regardless of whether you use Istio as your ingress or any other ingress controller (nginx, traefik, ...), the following setup applies to all of them:
Expose the ingress controller via a Service of type NodePort or LoadBalancer, depending on your infrastructure. In a cloud environment (GKE, AKS, EKS, ...) the latter will most likely work best for you.
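A minimal sketch of what that exposure could look like, assuming the controller's pods carry a label app: ingress-controller and listen on port 80 (both assumptions; most installation methods create this Service for you):
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
spec:
  type: LoadBalancer         # or NodePort if no cloud load balancer is available
  selector:
    app: ingress-controller  # assumed label on the controller pods
  ports:
  - name: http
    port: 80
    targetPort: 80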
Once it is exposed, set up a DNS A record pointing to the external IP address. Afterwards you can start configuring your ingress; depending on which ingress controller you chose, the following YAML may need some adjustments (the example is given for Istio):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
If a request for something like httpbin.example.com comes in to your ingress controller, it forwards the request to a service named httpbin on port 8000.
As can be seen in the YAML posted above, the rules and paths fields each take a list (indicated by the - on the next line). To expose multiple services, simply add a new entry to the list, e.g.:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /httpbin
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
      - path: /apache
        pathType: Prefix
        backend:
          serviceName: apache
          servicePort: 8080
This is going to send requests like httpbin.example.com/httpbin/ to httpbin and httpbin.example.com/apache/ to apache.
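Since you mentioned routing by subdomain: the same idea works with one host entry per subdomain instead of per path. A sketch using the service names and ports from your manifests (the example.com subdomains are assumptions; each needs its own DNS record pointing at the ingress controller):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - host: visitor.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: visitor-service
          servicePort: 8000
  - host: auth.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: auth-service
          servicePort: 3004
  - host: gateway.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: gateway
          servicePort: 8080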
For further information see:
https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
https://kubernetes.io/docs/concepts/services-networking/ingress/

Ingress route pointing to the wrong service

I've set up k3s v1.20.4+k3s1 with Klipper LB and nginx ingress 3.24.0 from the Helm charts.
I'm following this article, but I'm running into a very weird issue where my ingress hosts point to the wrong service.
Here is my configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-routing
spec:
  rules:
  - host: echo1.stage.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo1
            port:
              number: 80
  - host: echo2.stage.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo2
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=This is echo1"
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=This is the new (echo2)"
        ports:
        - containerPort: 5678
And my Cloudflare DNS records (no DNS proxy activated):
;; A Records
api.stage.example.com. 1 IN A 162.15.166.240
echo1.stage.example.com. 1 IN A 162.15.166.240
echo2.stage.example.com. 1 IN A 162.15.166.240
But when I do a curl on echo1.stage.example.com multiple times, here is what I get:
$ curl echo1.stage.example.com
This is echo1
$ curl echo1.stage.example.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
$ curl echo1.stage.example.com
This is the new (echo2)
Sometimes I get a Bad Gateway, and sometimes echo1.stage.example.com ends up pointing to the service assigned to echo2.stage.example.com. Is this because of the LB, or a bad configuration on my end? Thanks!
EDIT: It's not coming from the LB; I just switched to MetalLB and I still get the same issue.
OK, I found the issue. It was actually not related to the config I posted above, but to my kustomization.yaml, where I had:
commonLabels:
  app: myapp
Just removing that commonLabels entry solved the issue.
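That would also explain the symptom: if I understand kustomize's behaviour correctly, commonLabels is applied not only to metadata.labels but also to Service selectors and pod template labels, so with app: myapp both Services end up selecting the same set of pods, roughly:
# effective selector on both the echo1 and echo2 Services after commonLabels is applied (illustrative)
spec:
  selector:
    app: myapp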

Why am I getting 502 errors on my ALB endpoints, targeted at EKS-hosted services

I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80, it does show me my working pods.
But when I hit the endpoints generated by the ALB, I get 502 errors.
When I look at the registered targets of the target group, I see the message: Health checks failed with these codes: [502]
Here is my complete YAML.
---
#Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: "example-game"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
#Example nginxdemo Deployment and Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
      - image: nginxdemos/hello
        imagePullPolicy: Always
        name: "example-nginxdemo"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
#Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    Alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
  - http:
      paths:
      - path: /game/*
        backend:
          serviceName: "service-example-game"
          servicePort: 80
      - path: /nginxdemo/*
        backend:
          serviceName: "service-example-nginxdemo"
          servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress', it just started working.
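For reference, the only change was to the label on the Ingress metadata, roughly (a sketch based on the description above; the rest of the Ingress stays as posted):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
  labels:
    app: example-app-ingress   # was: example-app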