Kubernetes pod Cluster IP not responding?

I have two backend services deployed on Google Cloud Kubernetes Engine (GKE):
a) a backend service
b) an admin portal, which needs to connect to the backend service
Everything runs in one cluster.
Under Workloads / Pods I have three deployments running, where fitme:9000 is the backend and nginx-1:9000 is the admin portal service, and in Services I have the following.
Visualization
Explanation
1. D1 (fitme), D2 (mongo-mongodb) and D3 (nginx-1) are three deployments.
2. E1D1 (fitme-service), E2D1 (fitme-jr29g), E1D2 (mongo-mongodb), E2D2 (mongo-mongodb-rcwwc) and E1D3 (nginx-1-service) are Services.
3. `E1D1, E1D2 and E1D3` are exposed via `Load Balancer`, whereas `E2D1, E2D2` are exposed via `Cluster IP`.
The reasoning behind this:
D1 needs to access D2 (internally) -> this works perfectly fine. I use the E2D2 Service (Cluster IP) to access the D2 deployment from inside D1.
Now D3 needs access to the D1 deployment, so I exposed D1 as the E2D1 service and am trying to reach it internally via the generated Cluster IP of E2D1, but the request times out.
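For reference, this is roughly how I am trying to reach E2D1 from inside the nginx-1 container (the pod name is a placeholder, and the DNS form is shown only as the equivalent of the Cluster IP):

# Open a shell in the admin-portal pod (name is a placeholder).
kubectl exec -it nginx-1-<pod-id> -- sh

# Call the backend through the fitme-jr29g Cluster IP -- this is the request that times out.
curl http://10.35.240.95:9000/

# Equivalent in-cluster DNS name for the same Service.
curl http://fitme-jr29g.default.svc.cluster.local:9000/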
YAML for fitme-jr29g service
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:18:55Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-jr29g
  namespace: default
  resourceVersion: "486673"
  selfLink: /api/v1/namespaces/default/services/fitme-8t7rl
  uid: 875045eb-14f5-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.95
  ports:
  - port: 9000
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
YAML for nginx-1-service service
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T11:30:10Z"
  labels:
    app: admin
  name: nginx-1-service
  namespace: default
  resourceVersion: "489972"
  selfLink: /api/v1/namespaces/default/services/admin-service
  uid: 195b462e-14f7-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.250.90
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30628
    port: 8080
    protocol: TCP
    targetPort: 9000
  selector:
    app: admin
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.227.26.101
YAML for nginx-1 deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T11:24:09Z"
  generation: 2
  labels:
    app: admin
  name: admin
  namespace: default
  resourceVersion: "489624"
  selfLink: /apis/apps/v1/namespaces/default/deployments/admin
  uid: 426792e6-14f6-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: admin
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: admin
    spec:
      containers:
      - image: gcr.io/docker-226818/admin#sha256:602fe6b7e43d53251eebe2f29968bebbd756336c809cb1cd43787027537a5c8b
        imagePullPolicy: IfNotPresent
        name: admin-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T11:24:18Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T11:24:09Z"
    lastUpdateTime: "2019-12-02T11:24:18Z"
    message: ReplicaSet "admin-8d55dfbb6" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
YAML for fitme-service
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-02T13:38:21Z"
  generateName: fitme-
  labels:
    app: fitme
  name: fitme-service
  namespace: default
  resourceVersion: "525173"
  selfLink: /api/v1/namespaces/default/services/drogo-mzcgr
  uid: 01e8fc39-1509-11ea-823c-42010a8e0047
spec:
  clusterIP: 10.35.240.74
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 31016
    port: 80
    protocol: TCP
    targetPort: 9000
  selector:
    app: fitme
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 35.236.110.230
YAML for fitme deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2019-12-02T13:34:54Z"
  generation: 2
  labels:
    app: fitme
  name: fitme
  namespace: default
  resourceVersion: "525571"
  selfLink: /apis/apps/v1/namespaces/default/deployments/drogo
  uid: 865a5a8a-1508-11ea-823c-42010a8e0047
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: drogo
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: fitme
    spec:
      containers:
      - image: gcr.io/fitme-226818/drogo#sha256:ab49a4b12e7a14f9428a5720bbfd1808eb9667855cb874e973c386a4e9b59d40
        imagePullPolicy: IfNotPresent
        name: fitme-sha256
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2019-12-02T13:34:57Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2019-12-02T13:34:54Z"
    lastUpdateTime: "2019-12-02T13:34:57Z"
    message: ReplicaSet "drogo-5c7f449668" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 2
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
I am accessing fitme-jr29g from the nginx-1 deployment's container by using the 10.35.240.95:9000 Cluster IP address.

The Deployment object can, and often should, have network properties to expose the applications within its pods.
Pods are network-capable objects, with virtual Ethernet interfaces, which they need in order to receive incoming traffic.
Services, on the other hand, are completely network-oriented objects, meant mostly to relay network traffic into the pods.
You can think of it as pods (grouped in Deployments) being the backend and Services being the load balancers. In the end, both need network capabilities.
In your scenario, I'm not sure how you are exposing your deployment via a load balancer, since its pods don't seem to have any open ports.
Since the Services exposing your pods target port 9000, you can add it to the pod template in your Deployment:
spec:
  containers:
  - image: gcr.io/fitme-xxxxxxx
    name: fitme-sha256
    ports:
    - containerPort: 9000
Be sure that it matches the port where your container is actually receiving the incoming requests.
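Once the containerPort is in place, a quick sanity check might look like this (the pod name is a placeholder, and it assumes curl exists in the admin-portal image):

# The ClusterIP Service should now list the fitme pod as an endpoint.
kubectl get endpoints fitme-jr29g

# From inside the admin-portal pod, the backend should answer on the Service port.
kubectl exec -it <nginx-1-pod> -- curl http://fitme-jr29g.default.svc.cluster.local:9000/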

Related

Kubernetes Service does not have active Endpoint

I created a Deployment, Service and an Ingress. Unfortunately, the ingress-nginx-controller pods are complaining that my Service does not have an Active Endpoint:
controller.go:920] Service "<namespace>/web-server" does not have any active Endpoint.
My Service definition:
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/should_be_scraped: "false"
  creationTimestamp: "2021-06-22T07:07:18Z"
  labels:
    chart: <namespace>-core-1.9.2
    release: <namespace>
  name: web-server
  namespace: <namespace>
  resourceVersion: "9050796"
  selfLink: /api/v1/namespaces/<namespace>/services/web-server
  uid: 82b3c3b4-a181-4ba2-887a-a4498346bc81
spec:
  clusterIP: 10.233.56.52
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: web-server
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
My Deployment definition:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-06-22T07:07:19Z"
  generation: 1
  labels:
    app: web-server
    chart: <namespace>-core-1.9.2
    release: <namespace>
  name: web-server
  namespace: <namespace>
  resourceVersion: "9051062"
  selfLink: /apis/apps/v1/namespaces/<namespace>/deployments/web-server
  uid: fb085727-9e8a-4931-8067-fd4ed410b8ca
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: web-server
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web-server
    spec:
      containers:
      - env:
          <removed environment variables>
        image: <url>/<namespace>/web-server:1.10.1
        imagePullPolicy: IfNotPresent
        name: web-server
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        - containerPort: 8082
          name: metrics
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /actuator/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 5
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 1Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /config
          name: <namespace>-config
      dnsPolicy: ClusterFirst
      hostAliases:
      - hostnames:
        - <url>
        ip: 10.0.1.178
      imagePullSecrets:
      - name: registry-pull-secret
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: <namespace>-config
        name: <namespace>-config
status:
  conditions:
  - lastTransitionTime: "2021-06-22T07:07:19Z"
    lastUpdateTime: "2021-06-22T07:07:19Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2021-06-22T07:17:20Z"
    lastUpdateTime: "2021-06-22T07:17:20Z"
    message: ReplicaSet "web-server-6df6d6565b" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
In the same namespace I have more Service and Deployment resources; all of them work except this one (plus one other, see below).
# kubectl get endpoints -n <namespace>
NAME                 ENDPOINTS                                                           AGE
activemq             10.233.64.3:61613,10.233.64.3:8161,10.233.64.3:61616 + 1 more...   26d
content-backend      10.233.96.17:8080                                                   26d
datastore3           10.233.96.16:8080                                                   26d
web-server                                                                               74m
web-server-metrics                                                                       26d
As you can see, the selector and the label are the same (web-server) in both the Service and the Deployment definition.
C-Nan solved the problem and posted the solution as a comment:
I found the issue. The Pod was started, but not in Ready state due to a failing readinessProbe. I wasn't aware that an endpoint wouldn't be created until the Pod is in Ready state. Removing the readinessProbe created the Endpoint.
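In other words, a Service's Endpoints object only includes pods that are in the Ready state. Rather than dropping the probe entirely, it can also be adjusted so that it passes once the application is genuinely up; a minimal sketch for the web-server container, where the path, port and timings are assumptions that must match what the application really serves:

readinessProbe:
  httpGet:
    path: /actuator/health   # must return 2xx once the app is up (assumed path)
    port: 8080
    scheme: HTTP
  initialDelaySeconds: 30     # give the application longer to start (assumption)
  periodSeconds: 5
  timeoutSeconds: 5           # a 1s timeout can mark a slow-but-healthy app as not Ready
  failureThreshold: 3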
From the status of your Deployment, it seems that no pod is running for the Deployment.
status:
  conditions:
  - lastTransitionTime: "2021-06-22T07:07:19Z"
    lastUpdateTime: "2021-06-22T07:07:19Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2021-06-22T07:17:20Z"
    lastUpdateTime: "2021-06-22T07:17:20Z"
    message: ReplicaSet "web-server-6df6d6565b" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
The unavailableReplicas: 1 field indicates that the desired pod is not available. As a result, the Service has no active endpoint.
You can describe the Deployment and its pods to see why the pod is unavailable.
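A rough diagnostic sequence (resource names taken from the question; output will vary):

# Why is the Deployment not progressing?
kubectl describe deployment web-server -n <namespace>

# Is the pod Running but not Ready (e.g. failing its readiness probe), or not starting at all?
kubectl get pods -n <namespace> -l app=web-server
kubectl describe pod -n <namespace> -l app=web-server

# Endpoints only appear once the pod reports Ready.
kubectl get endpoints web-server -n <namespace>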

Load balancing not happening in Kubernetes

I've recently started learning Kubernetes and have run into a problem. I'm trying to deploy 2 pods that run the same Docker image; to do that I've set replicas: 2 in the deployment.yaml. I've also defined a Service of type LoadBalancer with externalTrafficPolicy set to Cluster. I confirmed that there are 2 endpoints by running kubectl describe service, and session affinity is set to None. Yet when I send requests repeatedly to the service port, all of them are routed to only one of the pods, while the other one just sits there. How can I make this more efficient? Or any insights as to what might be going wrong? Here's the deployment.yaml file.
EDIT: The solutions mentioned on Only 1 pod handles all requests in Kubernetes cluster didn't work for me.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-nameprodmstb
  labels:
    app: project-nameprodmstb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: project-nameprodmstb
  template:
    metadata:
      labels:
        app: project-nameprodmstb
    spec:
      containers:
      - name: project-nameprodmstb
        image: <some_image>
        imagePullPolicy: Always
        resources:
          requests:
            cpu: "1024m"
            memory: "4096Mi"
      imagePullSecrets:
      - name: "gcr-json-key"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: project-nameprodmstb
  name: project-nameprodmstb
  namespace: development
spec:
  ports:
  - name: project-nameprodmstb
    port: 8006
    protocol: TCP
    targetPort: 8006
  selector:
    app: project-nameprodmstb
  type: LoadBalancer
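For what it's worth, a rough way to watch how requests are spread across the two pods (service name taken from the YAML above; the external IP is a placeholder):

# Confirm both pod IPs are listed behind the Service.
kubectl get endpoints project-nameprodmstb -n development

# In one terminal, follow both pods' logs (--prefix needs a reasonably recent kubectl).
kubectl logs -n development -l app=project-nameprodmstb -f --prefix

# In another terminal, send repeated requests to the external IP.
for i in $(seq 1 20); do curl -s http://<external-ip>:8006/ > /dev/null; done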

Connect back-end with front-end over rest api call running on kubernetes

I have a two-tier application.
The frontend calls the webapi layer through a simple HTTP REST call:
http://mywebapi:5000/
My working docker-compose file is below, and the application works:
version: '3'
services:
  webfrontend:
    image: webfrontend
    build: ./nodeexpress-alibaba-ci-tutorial
    ports:
      - "3000:3000"
    networks:
      - my-shared-network
  mywebapi:
    image: mywebapi
    build: ./dotnetcorewebapi-alibaba-ci-tutorial
    ports:
      - "5000:5000"
    networks:
      - my-shared-network
networks:
  my-shared-network: {}
Now I'm trying to get this to work on Kubernetes.
I have created two deployments and two Services: a LoadBalancer for webfrontend and a ClusterIP for mywebapi.
But after deploying, I find that the data from mywebapi is not reaching the frontend. I can view the frontend in the browser through the load balancer's public IP.
Mywebapi deployment yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '3'
  creationTimestamp: '2019-09-28T13:31:32Z'
  generation: 3
  labels:
    app: mywebapi
    tier: backend
  name: mywebapi
  namespace: default
  resourceVersion: '1047268388'
  selfLink: /apis/apps/v1beta2/namespaces/default/deployments/mywebapi
  uid: 493ab5e0-e1f4-11e9-9a64-d63fe9981162
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mywebapi
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        aliyun.kubernetes.io/deploy-timestamp: '2019-09-28T14:36:01Z'
      labels:
        app: mywebapi
    spec:
      containers:
      - image: >-
          registry-intl-vpc.ap-southeast-1.aliyuncs.com/devopsci-t/mywebapi:1.0
        imagePullPolicy: Always
        name: mywebapi
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: '2019-09-28T14:51:18Z'
    lastUpdateTime: '2019-09-28T14:51:18Z'
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: 'True'
    type: Available
  - lastTransitionTime: '2019-09-28T14:49:55Z'
    lastUpdateTime: '2019-09-28T14:51:19Z'
    message: ReplicaSet "mywebapi-84cf98fb4f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: 'True'
    type: Progressing
  observedGeneration: 3
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
Mywebapi service yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-09-28T13:31:33Z'
  name: mywebapi-svc
  namespace: default
  resourceVersion: '1047557879'
  selfLink: /api/v1/namespaces/default/services/mywebapi-svc
  uid: 49e21207-e1f4-11e9-9a64-d63fe9981162
spec:
  clusterIP: None
  ports:
  - name: mywebapiport
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: mywebapi
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I even tried updating the HTTP REST call URL to http://mywebapi-svc:5000/, but it still does not work.
In the webfrontend pod logs I find the below error
Got error: getaddrinfo ENOTFOUND mywebapi-svc mywebapi-svc:5000
Sincerely appreciate all help
Thanks
............................................................
Update:
I changed mywebapi-svc so that it is no longer headless. The current YAML is below. The problem is still the same.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-09-29T15:21:44Z'
  name: mywebapi-svc
  namespace: default
  resourceVersion: '667545270'
  selfLink: /api/v1/namespaces/default/services/mywebapi-svc
  uid: d84503ee-e2cc-11e9-93ec-a65f0b53b1fa
spec:
  clusterIP: 172.19.0.74
  ports:
  - name: mywebapiport
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: mywebapi-default
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
It's because you're using a headless Service.
I guess you don't need a headless Service for this job, because the DNS server won't return a single IP address when more than one pod matches the label selector. Remove the spec.clusterIP field from your Service:
apiVersion: v1
kind: Service
metadata:
  name: mywebapi-svc
  namespace: default
spec:
  # clusterIP: None   <-- remove
  ports:
  - name: mywebapiport
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: mywebapi
  type: ClusterIP
Now you can call the http://service-name.namespace-name.svc:port endpoint from your front-end. It should work.
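Applied to the names in this question, the call from the front-end container would look roughly like this (the pod name is a placeholder, and it assumes curl is available in the image):

# Open a shell in the front-end pod.
kubectl exec -it <webfrontend-pod> -- sh

# Short form and fully-qualified form of the same in-cluster address.
curl http://mywebapi-svc:5000/
curl http://mywebapi-svc.default.svc.cluster.local:5000/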
In case your webapi is stateful, you should use a StatefulSet instead of a Deployment.

Kubernetes on Rancher - Ingress Path Issue

I have created a Node.js/Express application with an ingress controller, running on Kubernetes with Rancher. Only the default backend works perfectly: through it I can reach any of the routes I created with Express, e.g. http://1.2.3.4/api or http://1.2.3.4/api/districts.
What is not working are the paths that I defined, e.g. http://1.2.3.4/gg1/api or http://1.2.3.4/gg2/api/districts. In the pod logs I can see a 404 whenever I make a request on one of the paths that I defined.
I have been looking for a solution for hours now without success. I saw that a lot of people seem to have problems with ingress and paths for different reasons, but I couldn't find the solution to my problem yet. Does anyone here know the solution to this problem?
To get ingress working I followed this example: Using Kubernetes Ingress Controller from scratch
This is how I deploy everything:
kubectl create -f deployment1-config.yaml
kubectl create -f deployment2-config.yaml
kubectl expose deployment test-ingress-node-1 --target-port=5000 --type=NodePort
kubectl expose deployment test-ingress-node-2 --target-port=5000 --type=NodePort
kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --target-port=80 --type=NodePort
kubectl create -f ingress.yaml
All requests reach only the default backend's pod, and paths with /gg1 or /gg2 just give a 404 in the pod logs or Cannot GET /gg1 in the browser.
ingress-config.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-node-adv-ingress
  annotations:
    kubernetes.io/ingress.class: "rancher"
    ingress.kubernetes.io/rewrite-target: /
spec:
  backend:
    serviceName: test-ingress-node-1
    servicePort: 5000
  rules:
  - host:
    http:
      paths:
      - path: /gg1
        backend:
          serviceName: test-ingress-node-1
          servicePort: 5000
      - path: /gg2
        backend:
          serviceName: test-ingress-node-2
          servicePort: 5000
deployment-1.config.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 2
  labels:
    run: test-ingress-node-1
  name: test-ingress-node-1
  namespace: default
  resourceVersion: "123456"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/test-ingress-node-1
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test-ingress-node-1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test-ingress-node-1
    spec:
      containers:
      - image: myProject-service-latest
        imagePullPolicy: Always
        name: test-ingress-node-1
        ports:
        - containerPort: 5000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  observedGeneration: 2
  replicas: 1
  updatedReplicas: 1
deployment-2.config.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  generation: 2
  labels:
    run: test-ingress-node-2
  name: test-ingress-node-2
  namespace: default
  resourceVersion: "123456"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/test-ingress-node-2
spec:
  replicas: 1
  selector:
    matchLabels:
      run: test-ingress-node-2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: test-ingress-node-2
    spec:
      containers:
      - image: myProject-service-latest
        imagePullPolicy: Always
        name: test-ingress-node-2
        ports:
        - containerPort: 5000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  observedGeneration: 2
  replicas: 1
  updatedReplicas: 1
The correct path has to be the same as the Node.js resource endpoint. Review the Kubernetes Ingress documentation:
"Both the host and path must match the content of an incoming request before the loadbalancer directs traffic to the backend."
https://kubernetes.io/docs/concepts/services-networking/ingress/
Try to replace the line ingress.kubernetes.io/rewrite-target: / under annotations in the ingress-config.yaml file with this line:
nginx.ingress.kubernetes.io/rewrite-target: /
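In context, the ingress metadata would then look roughly like this (only the annotation key changes; whether the nginx. prefix is honoured depends on which ingress controller actually serves the Ingress):

metadata:
  name: test-node-adv-ingress
  annotations:
    kubernetes.io/ingress.class: "rancher"
    nginx.ingress.kubernetes.io/rewrite-target: /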

How do I run traefik behind Kubernetes on Google Container Engine?

So I have Traefik "running" on Kubernetes:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Service","apiVersion":"v1","metadata":{"name":"traefik","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"traefik-ingress-lb"}},"spec":{"ports":[{"name":"http","port":80,"targetPort":80},{"name":"https","port":443,"targetPort":443}],"selector":{"k8s-app":"traefik-ingress-lb"},"type":"LoadBalancer"},"status":{"loadBalancer":{}}}'
  creationTimestamp: 2016-11-30T23:15:49Z
  labels:
    k8s-app: traefik-ingress-lb
  name: traefik
  namespace: kube-system
  resourceVersion: "9672"
  selfLink: /api/v1/namespaces/kube-system/services/traefik
  uid: ee07b957-b752-11e6-88fa-42010af00083
spec:
  clusterIP: 10.11.251.200
  ports:
  - name: http
    nodePort: 30297
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 30247
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    k8s-app: traefik-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: # IP THAT IS ALLOCATED BY k8s BUT NOT ASSIGNED
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: '###'
  creationTimestamp: 2016-11-30T22:59:07Z
  generation: 2
  labels:
    k8s-app: traefik-ingress-lb
  name: traefik-ingress-controller
  namespace: kube-system
  resourceVersion: "23438"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/traefik-ingress-controller
  uid: 9919ff46-b750-11e6-88fa-42010af00083
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
        version: v1.1
    spec:
      containers:
      - args:
        - --web
        - --kubernetes
        - --configFile=/etc/config/traefik.toml
        - --logLevel=DEBUG
        image: gcr.io/myproject/traefik
        imagePullPolicy: Always
        name: traefik-ingress-lb
        ports:
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
        - mountPath: /etc/traefik
          name: traefik-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 60
      volumes:
      - configMap:
          defaultMode: 420
          name: traefik-config
        name: config-volume
      - emptyDir: {}
        name: traefik-volume
status:
  observedGeneration: 2
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
My problem is that the external IP assigned by Kubernetes does not actually forward to Traefik; in fact, the IP assigned does not even show up in my Google Cloud Platform console. How do I get Traefik working with a load-balancer on Google Container Engine?
If you do
kubectl describe svc/traefik
Do the endpoints match the IPs from:
kubectl get po -lk8s-app=traefik-ingress-lb -o wide
What happens when you do hit the LB IP? Does it load indefinitely, or do you get a different service?
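As an additional check on the GCP side (a guess, given that the allocated IP never appears in the console, which usually means the cloud load balancer was not provisioned), compare what Kubernetes allocated with what GCE actually created:

# Events on the Service usually say why the cloud load balancer could not be created.
kubectl describe svc traefik -n kube-system

# Forwarding rules and target pools GCE provisions for LoadBalancer Services.
gcloud compute forwarding-rules list
gcloud compute target-pools list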
There was a new fork of Traefik that supposedly fixes some kubernetes issues. However, I did further reading and talking about Traefik and uncovered some (recent) advisories of potential instability. While this is to be expected in new software, I decided to switch to NGINX to handle my reverse proxy. It's been working wonderfully, so I'm going to go ahead and close this question.