How do I run Traefik behind Kubernetes on Google Container Engine?

So I have Traefik "running" on Kubernetes:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: '{"kind":"Service","apiVersion":"v1","metadata":{"name":"traefik","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"traefik-ingress-lb"}},"spec":{"ports":[{"name":"http","port":80,"targetPort":80},{"name":"https","port":443,"targetPort":443}],"selector":{"k8s-app":"traefik-ingress-lb"},"type":"LoadBalancer"},"status":{"loadBalancer":{}}}'
  creationTimestamp: 2016-11-30T23:15:49Z
  labels:
    k8s-app: traefik-ingress-lb
  name: traefik
  namespace: kube-system
  resourceVersion: "9672"
  selfLink: /api/v1/namespaces/kube-system/services/traefik
  uid: ee07b957-b752-11e6-88fa-42010af00083
spec:
  clusterIP: 10.11.251.200
  ports:
  - name: http
    nodePort: 30297
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    nodePort: 30247
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    k8s-app: traefik-ingress-lb
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: # IP THAT IS ALLOCATED BY k8s BUT NOT ASSIGNED
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    kubectl.kubernetes.io/last-applied-configuration: '###'
  creationTimestamp: 2016-11-30T22:59:07Z
  generation: 2
  labels:
    k8s-app: traefik-ingress-lb
  name: traefik-ingress-controller
  namespace: kube-system
  resourceVersion: "23438"
  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/deployments/traefik-ingress-controller
  uid: 9919ff46-b750-11e6-88fa-42010af00083
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
        version: v1.1
    spec:
      containers:
      - args:
        - --web
        - --kubernetes
        - --configFile=/etc/config/traefik.toml
        - --logLevel=DEBUG
        image: gcr.io/myproject/traefik
        imagePullPolicy: Always
        name: traefik-ingress-lb
        ports:
        - containerPort: 80
          hostPort: 80
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        resources:
          limits:
            cpu: 200m
            memory: 30Mi
          requests:
            cpu: 100m
            memory: 20Mi
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /etc/config
          name: config-volume
        - mountPath: /etc/traefik
          name: traefik-volume
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 60
      volumes:
      - configMap:
          defaultMode: 420
          name: traefik-config
        name: config-volume
      - emptyDir: {}
        name: traefik-volume
status:
  observedGeneration: 2
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
My problem is that the external IP assigned by Kubernetes does not actually forward to Traefik; in fact, the IP assigned does not even show up in my Google Cloud Platform console. How do I get Traefik working with a load-balancer on Google Container Engine?

If you run
kubectl describe svc/traefik
do the endpoints match the pod IPs from:
kubectl get po -l k8s-app=traefik-ingress-lb -o wide
What happens when you hit the LB IP? Does it load indefinitely, or do you reach a different service?
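For the GKE side specifically, a rough checklist (assuming the Service really is of type LoadBalancer in kube-system; adjust names and project settings to your own setup) would be:
# Does the Service select any running pods at all?
kubectl describe svc traefik -n kube-system
kubectl get endpoints traefik -n kube-system

# Did GKE actually provision the network load balancer? The forwarding
# rule's IP should match the EXTERNAL-IP shown by kubectl get svc.
gcloud compute forwarding-rules list
gcloud compute target-pools list

# Is traffic to the node ports allowed by a firewall rule?
gcloud compute firewall-rules list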

There was a new fork of Traefik that supposedly fixes some Kubernetes issues. However, after further reading and discussion about Traefik, I came across some (recent) advisories of potential instability. While this is to be expected in new software, I decided to switch to NGINX to handle my reverse proxy. It's been working wonderfully, so I'm going to go ahead and close this question.

Related

GKE | Statefulset gets deleted along with a service when deployed in the kube-system namespace

I have a GKE cluster of 5 nodes in the same zone. I'm trying to deploy an Elasticsearch StatefulSet of 3 nodes in the kube-system namespace, but every time I do, the StatefulSet gets deleted and the pods go into the Terminating state immediately after the second pod is created.
I tried checking the pod logs and describing the pod, but found nothing useful.
I even checked the GKE cluster logs, where I found the deletion request, but there was no extra information about who initiated it or why it is happening.
When I changed the namespace to default, everything was fine and the pods reached the Ready state.
Below is the manifest file I'm using for this deployment.
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    # addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    # addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch-logging
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    # addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: elasticsearch-logging
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch-logging
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    version: 7.16.2
    kubernetes.io/cluster-service: "true"
    # addonmanager.kubernetes.io/mode: Reconcile
spec:
  serviceName: elasticsearch-logging
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: elasticsearch-logging
      version: 7.16.2
  template:
    metadata:
      labels:
        k8s-app: elasticsearch-logging
        version: 7.16.2
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: elasticsearch-logging
      containers:
      - image: docker.elastic.co/elasticsearch/elasticsearch:7.16.2
        name: elasticsearch-logging
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: elasticsearch-logging
          mountPath: /data
        env:
        #Added by Nour
        - name: discovery.seed_hosts
          value: elasticsearch-master-headless
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: elasticsearch-logging
        # emptyDir: {}
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-logging-init
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-logging
    spec:
      storageClassName: "standard"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 30Gi
---
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-logging
  namespace: kube-system
  labels:
    k8s-app: elasticsearch-logging
    kubernetes.io/cluster-service: "true"
    # addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  type: NodePort
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
    nodePort: 31335
  selector:
    k8s-app: elasticsearch-logging
#Added by Nour
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch-master
  name: elasticsearch-master
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  selector:
    app: elasticsearch-master
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elasticsearch-master
  name: elasticsearch-master-headless
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9200
    protocol: TCP
    targetPort: 9200
  - name: transport
    port: 9300
    protocol: TCP
    targetPort: 9300
  clusterIP: None
  selector:
    app: elasticsearch-master
Below are the available namespaces
$ kubectl get ns
NAME              STATUS   AGE
default           Active   4d15h
kube-node-lease   Active   4d15h
kube-public       Active   4d15h
kube-system       Active   4d15h
Am I using any old API version that might cause the issue?
Thank you.
To close, I think it makes sense to paste the final answer here:
I understand your curiosity. I guess GCP just started preventing people from deploying things to the kube-system namespace, since that risks interfering with GKE itself. I had never tried to deploy to the kube-system namespace before, so I'm not sure whether it was always like this or whether it changed recently.
Overall, I recommend avoiding deployments into the kube-system namespace on GKE.
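As a sketch of that workaround (the logging namespace and the file name are only examples, not from the original setup), the manifests above can be moved out of kube-system by creating a dedicated namespace and re-pointing every namespace: field at it:
kubectl create namespace logging

# Then change every
#   namespace: kube-system
# to
#   namespace: logging
# in the ServiceAccount, the ClusterRoleBinding subject, the StatefulSet
# and the Services above, and re-apply the file:
kubectl apply -f elasticsearch-logging.yaml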

Kubernetes Traefik ingress rule for localhost

I am trying to connect the Traefik dashboard to localhost. My manifest below will bring up localhost:port, but I only get 404 errors. I'm not sure how to set up the ingress to work locally. The base code was set up to run on an AWS NLB; I am trying to set it up to run locally. The manifest below contains the deployment and service for the Traefik Kubernetes install.
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: traefik-ingress-lb
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      # hostNetwork: true
      serviceAccountName: traefik-ingress-lb
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik:v2.5
        imagePullPolicy: IfNotPresent
        name: traefik-ingress-lb
        args:
        - --serversTransport.insecureSkipVerify=true
        - --providers.kubernetesingress=true
        - --providers.kubernetescrd
        - --entryPoints.traefik.address=:1080
        - --entryPoints.https.address=:443
        - --entrypoints.https.http.tls=true
        - --entryPoints.https.forwardedHeaders.insecure=true
        - --entryPoints.https2.address=:4443
        - --entrypoints.https2.http.tls=true
        - --entryPoints.https2.forwardedHeaders.insecure=true
        - --entryPoints.turn.address=:5349
        - --entrypoints.turn.http.tls=true
        - --entryPoints.turn.forwardedHeaders.insecure=true
        - --api
        - --api.insecure
        - --accesslog
        - --log.level=INFO
        - --pilot.dashboard=false
        - --entryPoints.http.address=:80
        - --entrypoints.http.http.redirections.entryPoint.to=https
        - --entrypoints.http.http.redirections.entryPoint.scheme=https
        - --entrypoints.http.http.redirections.entrypoint.permanent=true
        resources:
          limits:
            memory: 3072Mi
            cpu: 1.5
          requests:
            memory: 1024Mi
            cpu: 1
---
apiVersion: v1
kind: Service
metadata:
  name: lb
  namespace: kube-system
  annotations:
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  - name: https2
    port: 4443
    targetPort: 4443
  - name: turn
    port: 5349
    targetPort: 5349
---
apiVersion: v1
kind: Service
metadata:
  name: dashboard
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - name: dashboard
    port: 1080
    targetPort: 1080
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik
  namespace: kube-system
  # annotations:
  #   ingress.kubernetes.io/whitelist-x-forwarded-for: "true"
spec:
  entryPoints:
  - https
  # - web
  routes:
  - kind: Rule
    match: "PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
    # middlewares:
    # - name: internal-ip-whitelist
    #   namespace: traefik
    services:
    - kind: Service
      name: dashboard
      namespace: kube-system
      passHostHeader: true
      port: 1080
Your IngressRoute points to the dashboard service in the namespace 'kube-system', but the Traefik dashboard service is deployed in the namespace 'traefik'.
Therefore the route does not resolve, which leads to the 404 from Traefik.
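A minimal fix along those lines (only a sketch; it assumes you keep the dashboard Service in the traefik namespace) is to make the IngressRoute's service reference point at that namespace. Depending on the Traefik version, cross-namespace references may also require starting Traefik with --providers.kubernetescrd.allowCrossNamespace=true; the alternative is simply to move the dashboard Service into kube-system.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik
  namespace: kube-system
spec:
  entryPoints:
  - https
  routes:
  - kind: Rule
    match: "PathPrefix(`/api`) || PathPrefix(`/dashboard`)"
    services:
    - kind: Service
      name: dashboard
      namespace: traefik   # must match the namespace the dashboard Service is in
      passHostHeader: true
      port: 1080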

GKE Kubernetes LoadBalancer returns connection reset by peer

I have encountered a strange problem with my cluster.
In my cluster I have a deployment and a LoadBalancer service exposing this deployment.
It worked like a charm, but suddenly the LoadBalancer started to return an error:
curl: (56) Recv failure: Connection reset by peer
The error shows up while the pod and the load balancer are both running with no errors in their logs.
What I have already tried:
deleting the pod
redeploying the service + deployment from scratch
But the issue persists.
My service YAML:
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/neg: '{"ingress":true}'
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME","app.kubernetes.io/version":"latest"},"name":"APP-NAME","namespace":"namespacex"},"spec":{"ports":[{"name":"web","port":3000}],"selector":{"app.kubernetes.io/instance":"RELEASE-NAME","app.kubernetes.io/name":"APP-NAME"},"type":"LoadBalancer"}}
  creationTimestamp: "2021-08-03T07:55:00Z"
  finalizers:
  - service.kubernetes.io/load-balancer-cleanup
  labels:
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/name: APP-NAME
    app.kubernetes.io/version: latest
  name: APP-NAME
  namespace: namespacex
  resourceVersion: "14583904"
  uid: 7fb4d7e6-4316-44e5-8f9b-7a466bc776da
spec:
  clusterIP: 10.4.18.36
  clusterIPs:
  - 10.4.18.36
  externalTrafficPolicy: Cluster
  ports:
  - name: web
    nodePort: 30970
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/name: APP-NAME
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: xx.xxx.xxx.xxx
My deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: APP-NAME
  labels:
    app.kubernetes.io/name: APP-NAME
    app.kubernetes.io/instance: RELEASE-NAME
    app.kubernetes.io/version: "latest"
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: APP-NAME
      app.kubernetes.io/instance: RELEASE-NAME
  template:
    metadata:
      annotations:
        checksum/config: 5e6ff0d6fa64b90b0365e9f3939cefc0a619502b32564c4ff712067dbe44ab90
        checksum/secret: 76e0a1351da90c0cef06851e3aa9e7c80b415c29b11f473d4a2520ade9c892ce
      labels:
        app.kubernetes.io/name: APP-NAME
        app.kubernetes.io/instance: RELEASE-NAME
    spec:
      serviceAccountName: APP-NAME
      containers:
      - name: APP-NAME
        image: 'docker.io/xxxxxxxx:latest'
        imagePullPolicy: "Always"
        ports:
        - name: http
          containerPort: 3000
        livenessProbe:
          httpGet:
            path: /balancer/
            port: http
        readinessProbe:
          httpGet:
            path: /balancer/
            port: http
        env:
          ...
        volumeMounts:
        - name: config-volume
          mountPath: /home/app/config/
        resources:
          limits:
            cpu: 400m
            memory: 256Mi
          requests:
            cpu: 400m
            memory: 256Mi
      volumes:
      - name: config-volume
        configMap:
          name: app-config
      imagePullSecrets:
      - name: secret
The issue in my case turned out to be a network component (a firewall) blocking the outbound connection after deeming the cluster 'unsafe' for no apparent reason.
So in essence it was not a Kubernetes issue but an IT one.
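For anyone triaging a similar "connection reset by peer", it can help to establish where the reset is introduced before blaming the cluster. A rough sketch, reusing the ClusterIP (10.4.18.36), port (3000) and NodePort (30970) from the Service above; the node and LB IPs are placeholders:
# 1. From inside the cluster, against the ClusterIP (skips the external LB)
kubectl run curl-test --rm -it --image=busybox:1.28 --restart=Never -- \
  wget -O- http://10.4.18.36:3000/

# 2. From a VM in the same VPC, against a node's NodePort
wget -O- http://<node-ip>:30970/

# 3. From outside, against the load balancer
wget -O- http://<lb-ip>:3000/
# If 1 and 2 succeed but 3 resets, the problem sits in front of the cluster
# (LB, firewall, or another network appliance), as it did here.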

Connect back-end with front-end over a REST API call running on Kubernetes

I have a two-tier application.
The frontend calls the webapi layer through a simple HTTP REST call:
http://mywebapi:5000/
My working docker-compose file is below, and the application works:
version: '3'
services:
  webfrontend:
    image: webfrontend
    build: ./nodeexpress-alibaba-ci-tutorial
    ports:
    - "3000:3000"
    networks:
    - my-shared-network
  mywebapi:
    image: mywebapi
    build: ./dotnetcorewebapi-alibaba-ci-tutorial
    ports:
    - "5000:5000"
    networks:
    - my-shared-network
networks:
  my-shared-network: {}
Now I'm trying to get this to work on Kubernetes.
I have created two deployments and two services: a LoadBalancer for webfrontend and a ClusterIP for mywebapi.
But after deploying, I find that the data from mywebapi is not reaching the frontend. I can view the frontend in the browser through the load balancer's public IP.
mywebapi deployment YAML:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: '3'
  creationTimestamp: '2019-09-28T13:31:32Z'
  generation: 3
  labels:
    app: mywebapi
    tier: backend
  name: mywebapi
  namespace: default
  resourceVersion: '1047268388'
  selfLink: /apis/apps/v1beta2/namespaces/default/deployments/mywebapi
  uid: 493ab5e0-e1f4-11e9-9a64-d63fe9981162
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: mywebapi
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        aliyun.kubernetes.io/deploy-timestamp: '2019-09-28T14:36:01Z'
      labels:
        app: mywebapi
    spec:
      containers:
      - image: >-
          registry-intl-vpc.ap-southeast-1.aliyuncs.com/devopsci-t/mywebapi:1.0
        imagePullPolicy: Always
        name: mywebapi
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 2
  conditions:
  - lastTransitionTime: '2019-09-28T14:51:18Z'
    lastUpdateTime: '2019-09-28T14:51:18Z'
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: 'True'
    type: Available
  - lastTransitionTime: '2019-09-28T14:49:55Z'
    lastUpdateTime: '2019-09-28T14:51:19Z'
    message: ReplicaSet "mywebapi-84cf98fb4f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: 'True'
    type: Progressing
  observedGeneration: 3
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
mywebapi service YAML:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-09-28T13:31:33Z'
  name: mywebapi-svc
  namespace: default
  resourceVersion: '1047557879'
  selfLink: /api/v1/namespaces/default/services/mywebapi-svc
  uid: 49e21207-e1f4-11e9-9a64-d63fe9981162
spec:
  clusterIP: None
  ports:
  - name: mywebapiport
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: mywebapi
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I even tried updating the HTTP REST call URL to http://mywebapi-svc:5000/, but it still does not work.
In the webfrontend pod logs I find the error below:
Got error: getaddrinfo ENOTFOUND mywebapi-svc mywebapi-svc:5000
I sincerely appreciate any help.
Thanks
Update:
I changed mywebapi-svc so that it is no longer headless. The current YAML is below; the problem is still the same.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-09-29T15:21:44Z'
  name: mywebapi-svc
  namespace: default
  resourceVersion: '667545270'
  selfLink: /api/v1/namespaces/default/services/mywebapi-svc
  uid: d84503ee-e2cc-11e9-93ec-a65f0b53b1fa
spec:
  clusterIP: 172.19.0.74
  ports:
  - name: mywebapiport
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: mywebapi-default
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
It's because you're using a headless service.
I guess you don't need a headless service for this job, because the DNS server won't return a single IP address when more than one pod matches the label selector. Remove the spec.clusterIP field from your service.
apiVersion: v1
kind: Service
metadata:
  name: mywebapi-svc
  namespace: default
spec:
  # clusterIP: None <-- remove
  ports:
  - name: mywebapiport
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: mywebapi
  type: ClusterIP
Now you can call the http://service-name.namespace-name.svc:port endpoint from your front-end. It should work.
In case your webapi is stateful, you should use a StatefulSet instead of a Deployment.
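If it still fails after removing clusterIP: None, a quick way to check DNS and endpoints from inside the cluster (a throwaway busybox pod; none of this is part of the original manifests) is:
# Resolve the Service name the same way the webfrontend pod would
kubectl run dns-test --rm -it --image=busybox:1.28 --restart=Never -- \
  nslookup mywebapi-svc.default.svc.cluster.local

# Verify the Service selector actually matches the mywebapi pods
kubectl get endpoints mywebapi-svc
kubectl get pods -l app=mywebapi -o wide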

Cannot access Kubernetes service outside the cluster

I have created a Kubernetes Service for my deployment; using a load balancer, an external IP has been assigned along with a node port, but I am unable to access the service from outside the cluster using the external IP and NodePort.
The service has been properly created and is up and running.
Below is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dev-portal
  template:
    metadata:
      labels:
        app: dev-portal
    spec:
      containers:
      - name: dev-portal
        image: bhavesh/ti-portal:develop
        imagePullPolicy: Always
        resources:
          requests:
            memory: "512Mi"
            cpu: "1"
          limits:
            memory: "1G"
            cpu: "1"
        ports:
        - containerPort: 9000
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: dev-portal
  labels:
    app: dev-portal
spec:
  selector:
    app: dev-portal
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
    nodePort: 30429
  type: LoadBalancer
For some reason, I am unable to access my service from outside, and a 'Refused to connect' message is shown.
Update:
The service as shown by kubectl describe is below:
Name:                     trakinvest-dev-portal
Namespace:                default
Labels:                   app=trakinvest-dev-portal
Annotations:              kubectl.kubernetes.io/last-applied-configuration:
                            {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"trakinvest-dev-portal"},"name":"trakinvest-dev-portal","...
Selector:                 app=trakinvest-dev-portal
Type:                     LoadBalancer
IP:                       10.245.185.62
LoadBalancer Ingress:     139.59.54.108
Port:                     <unset>  9000/TCP
TargetPort:               9000/TCP
NodePort:                 <unset>  30429/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
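The Endpoints: <none> line above suggests the Service selector (app=trakinvest-dev-portal) is not matching any running pods, which alone would explain the refused connections. A minimal check (names taken from the output above; only a sketch) might be:
# Compare the pod labels with the Service selector
kubectl get pods --show-labels
kubectl get svc trakinvest-dev-portal -o wide

# A Service only forwards traffic once it has endpoints
kubectl get endpoints trakinvest-dev-portal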