I'm trying to deploy my Apollo Server application to my GKE cluster. However, when I visit the static IP for my site I receive a 502 Bad Gateway error. I was able to get my client to deploy properly in a similar fashion, so I'm not sure what I'm doing wrong. My deployment logs show that the server started properly, but my Ingress reports the service as unhealthy because it is failing the health check.
Here is my deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <DEPLOYMENT_NAME>
  labels:
    app: <DEPLOYMENT_NAME>
spec:
  replicas: 1
  selector:
    matchLabels:
      app: <POD_NAME>
  template:
    metadata:
      name: <POD_NAME>
      labels:
        app: <POD_NAME>
    spec:
      serviceAccountName: <SERVICE_ACCOUNT_NAME>
      containers:
        - name: <CONTAINER_NAME>
          image: <MY_IMAGE>
          imagePullPolicy: Always
          ports:
            - containerPort: <CONTAINER_PORT>
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - '/cloud_sql_proxy'
            - '-instances=<MY_PROJECT>:<MY_DB_INSTANCE>=tcp:<MY_DB_PORT>'
          securityContext:
            runAsNonRoot: true
My service.yml
apiVersion: v1
kind: Service
metadata:
  name: <MY_SERVICE_NAME>
  labels:
    app: <MY_SERVICE_NAME>
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
spec:
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
      targetPort: <CONTAINER_PORT>
  selector:
    app: <POD_NAME>
And my ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: <INGRESS_NAME>
  annotations:
    kubernetes.io/ingress.global-static-ip-name: <CLUSTER_NAME>
    networking.gke.io/managed-certificates: <CLUSTER_NAME>
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: <SERVICE_NAME>
              servicePort: 80
Any ideas what is causing this failure?
With Apollo Server you need the health check to look at the correct endpoint, since Apollo serves its health check at /.well-known/apollo/server-health. Add the following to your deployment.yml under the server container:
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>
readinessProbe:
  initialDelaySeconds: 30
  periodSeconds: 30
  httpGet:
    path: '/.well-known/apollo/server-health'
    port: <CONTAINER_PORT>
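If you want to confirm the endpoint before wiring up the probes, a quick check from your workstation (a sketch; the local port and deployment name are placeholders) is to port-forward to the deployment and curl Apollo's health path:

# forward the container port to localhost (substitute your own names and ports)
kubectl port-forward deployment/<DEPLOYMENT_NAME> 4000:<CONTAINER_PORT>
# in a second terminal; a healthy Apollo Server should answer 200 with a small JSON status body
curl -i http://localhost:4000/.well-known/apollo/server-health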
I've recently started learning Kubernetes and have run into a problem. I'm trying to deploy two pods that run the same Docker image, so I set replicas: 2 in the deployment.yaml. I've also defined a Service of type LoadBalancer with externalTrafficPolicy set to Cluster. I confirmed that there are two endpoints by running kubectl describe service (a sketch of these checks follows the manifests below), and session persistence is set to none. Yet when I send requests repeatedly to the service port, all of them are routed to only one of the pods; the other one just sits there. How can I make this more balanced? Any insight into what might be going wrong? Here's the deployment.yaml file:
EDIT: The solutions mentioned on Only 1 pod handles all requests in Kubernetes cluster didn't work for me.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-nameprodmstb
  labels:
    app: project-nameprodmstb
spec:
  replicas: 2
  selector:
    matchLabels:
      app: project-nameprodmstb
  template:
    metadata:
      labels:
        app: project-nameprodmstb
    spec:
      containers:
        - name: project-nameprodmstb
          image: <some_image>
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "1024m"
              memory: "4096Mi"
      imagePullSecrets:
        - name: "gcr-json-key"
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: project-nameprodmstb
  name: project-nameprodmstb
  namespace: development
spec:
  ports:
    - name: project-nameprodmstb
      port: 8006
      protocol: TCP
      targetPort: 8006
  selector:
    app: project-nameprodmstb
  type: LoadBalancer
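For reference, the checks described above look roughly like this (a sketch; names and the namespace are taken from the manifests and may need adjusting):

# confirm both pod IPs appear as endpoints behind the service
kubectl describe service project-nameprodmstb -n development
kubectl get endpoints project-nameprodmstb -n development
# watch each pod's logs while sending repeated requests, to see which pod answers
kubectl logs -f <pod-name>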
I have two backend services deployed on Google Kubernetes Engine:
a) Backend Service
b) Admin portal, which needs to connect to the Backend Service
Everything is available in one cluster.
Under Workloads / Pods, I have three deployments running, where fitme:9000 is the backend and nginx-1:9000 is the admin portal service,
and under Services I have:
Visualization
Explanation
1. D1 (fitme), D2 (mongo-mongodb), D3 (nginx-1) are three deployments
2. E1D1 (fitme-service), E2D1 (fitme-jr29g), E1D2 (mongo-mongodb), E2D2 (mongo-mongodb-rcwwc) and E1D3 (nginx-1-service) are Services
3. `E1D1, E1D2 and E1D3` are exposed over `Load Balancer` whereas `E2D1 , E2D2` are exposed over `Cluster IP`.
The reason behind this layout:
D1 needs to access D2 (internally) -> This is working perfectly fine; I am using the E2D2 exposed service (cluster IP) to access the D2 deployment from inside D1.
Now, D3 needs access to the D1 deployment. So I exposed D1 as the E2D1 service and am trying to reach it internally via the generated cluster IP of E2D1, but the request times out.
YAML for fitme-jr29g service
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-12-02T11:18:55Z"
generateName: fitme-
labels:
app: fitme
name: fitme-jr29g
namespace: default
resourceVersion: "486673"
selfLink: /api/v1/namespaces/default/services/fitme-8t7rl
uid: 875045eb-14f5-11ea-823c-42010a8e0047
spec:
clusterIP: 10.35.240.95
ports:
- port: 9000
protocol: TCP
targetPort: 9000
selector:
app: fitme
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
YAML for nginx-1-service service
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-12-02T11:30:10Z"
labels:
app: admin
name: nginx-1-service
namespace: default
resourceVersion: "489972"
selfLink: /api/v1/namespaces/default/services/admin-service
uid: 195b462e-14f7-11ea-823c-42010a8e0047
spec:
clusterIP: 10.35.250.90
externalTrafficPolicy: Cluster
ports:
- nodePort: 30628
port: 8080
protocol: TCP
targetPort: 9000
selector:
app: admin
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 35.227.26.101
YAML for nginx-1 deployment
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2019-12-02T11:24:09Z"
generation: 2
labels:
app: admin
name: admin
namespace: default
resourceVersion: "489624"
selfLink: /apis/apps/v1/namespaces/default/deployments/admin
uid: 426792e6-14f6-11ea-823c-42010a8e0047
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: admin
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: admin
spec:
containers:
- image: gcr.io/docker-226818/admin#sha256:602fe6b7e43d53251eebe2f29968bebbd756336c809cb1cd43787027537a5c8b
imagePullPolicy: IfNotPresent
name: admin-sha256
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2019-12-02T11:24:18Z"
lastUpdateTime: "2019-12-02T11:24:18Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2019-12-02T11:24:09Z"
lastUpdateTime: "2019-12-02T11:24:18Z"
message: ReplicaSet "admin-8d55dfbb6" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 1
replicas: 1
updatedReplicas: 1
YAML for fitme-service
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-12-02T13:38:21Z"
generateName: fitme-
labels:
app: fitme
name: fitme-service
namespace: default
resourceVersion: "525173"
selfLink: /api/v1/namespaces/default/services/drogo-mzcgr
uid: 01e8fc39-1509-11ea-823c-42010a8e0047
spec:
clusterIP: 10.35.240.74
externalTrafficPolicy: Cluster
ports:
- nodePort: 31016
port: 80
protocol: TCP
targetPort: 9000
selector:
app: fitme
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 35.236.110.230
YAML for fitme deployment
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2019-12-02T13:34:54Z"
generation: 2
labels:
app: fitme
name: fitme
namespace: default
resourceVersion: "525571"
selfLink: /apis/apps/v1/namespaces/default/deployments/drogo
uid: 865a5a8a-1508-11ea-823c-42010a8e0047
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: drogo
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: fitme
spec:
containers:
- image: gcr.io/fitme-226818/drogo#sha256:ab49a4b12e7a14f9428a5720bbfd1808eb9667855cb874e973c386a4e9b59d40
imagePullPolicy: IfNotPresent
name: fitme-sha256
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2019-12-02T13:34:57Z"
lastUpdateTime: "2019-12-02T13:34:57Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2019-12-02T13:34:54Z"
lastUpdateTime: "2019-12-02T13:34:57Z"
message: ReplicaSet "drogo-5c7f449668" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 2
readyReplicas: 1
replicas: 1
updatedReplicas: 1
I am accessing fitme-jr29g by putting the 10.35.240.95:9000 IP address into the nginx-1 deployment's container.
The Deployment object can, and often should, declare network properties to expose the applications running inside its pods.
Pods are network-capable objects with virtual Ethernet interfaces, which they need in order to receive incoming traffic.
Services, on the other hand, are purely network-oriented objects, meant mostly to relay network traffic to the pods.
You can think of pods (grouped in deployments) as the backends and services as the load balancers. In the end, both need network capabilities.
In your scenario, I'm not sure how you are exposing your deployment via the load balancer, since its pods don't appear to have any open ports.
Since the services exposing your pods target port 9000, you can add it to the pod template in your deployment:
spec:
  containers:
    - image: gcr.io/fitme-xxxxxxx
      name: fitme-sha256
      ports:
        - containerPort: 9000
Be sure it matches the port on which your container actually receives incoming requests.
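After redeploying, one way to re-test the path described in the question (a sketch; the pod name is hypothetical and assumes curl exists in the image) is to call the service from inside the admin pod, preferably by its DNS name rather than the raw cluster IP:

# substitute the real nginx-1 pod name from `kubectl get pods`
kubectl exec -it <nginx-1-pod> -- curl http://10.35.240.95:9000/
kubectl exec -it <nginx-1-pod> -- curl http://fitme-jr29g.default.svc.cluster.local:9000/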
I have a two-tier application.
The frontend calls the web API layer through a simple HTTP REST call:
http://mywebapi:5000/
My working docker-compose file is below, and the application works:
version: '3'
services:
  webfrontend:
    image: webfrontend
    build: ./nodeexpress-alibaba-ci-tutorial
    ports:
      - "3000:3000"
    networks:
      - my-shared-network
  mywebapi:
    image: mywebapi
    build: ./dotnetcorewebapi-alibaba-ci-tutorial
    ports:
      - "5000:5000"
    networks:
      - my-shared-network
networks:
  my-shared-network: {}
Now I'm trying to get this to work on Kubernetes.
I have created two deployments and two services: a LoadBalancer for webfrontend and a ClusterIP for mywebapi.
But after deploying, I find that the data from mywebapi is not reaching the frontend. I can view the frontend in the browser through the load balancer's public IP.
Mywebapi deployment yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: '3'
creationTimestamp: '2019-09-28T13:31:32Z'
generation: 3
labels:
app: mywebapi
tier: backend
name: mywebapi
namespace: default
resourceVersion: '1047268388'
selfLink: /apis/apps/v1beta2/namespaces/default/deployments/mywebapi
uid: 493ab5e0-e1f4-11e9-9a64-d63fe9981162
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
app: mywebapi
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
annotations:
aliyun.kubernetes.io/deploy-timestamp: '2019-09-28T14:36:01Z'
labels:
app: mywebapi
spec:
containers:
- image: >-
registry-intl-vpc.ap-southeast-1.aliyuncs.com/devopsci-t/mywebapi:1.0
imagePullPolicy: Always
name: mywebapi
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 2
conditions:
- lastTransitionTime: '2019-09-28T14:51:18Z'
lastUpdateTime: '2019-09-28T14:51:18Z'
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: 'True'
type: Available
- lastTransitionTime: '2019-09-28T14:49:55Z'
lastUpdateTime: '2019-09-28T14:51:19Z'
message: ReplicaSet "mywebapi-84cf98fb4f" has successfully progressed.
reason: NewReplicaSetAvailable
status: 'True'
type: Progressing
observedGeneration: 3
readyReplicas: 2
replicas: 2
updatedReplicas: 2
Mywebapi service yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-09-28T13:31:33Z'
  name: mywebapi-svc
  namespace: default
  resourceVersion: '1047557879'
  selfLink: /api/v1/namespaces/default/services/mywebapi-svc
  uid: 49e21207-e1f4-11e9-9a64-d63fe9981162
spec:
  clusterIP: None
  ports:
    - name: mywebapiport
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: mywebapi
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
I even tried updating the HTTP REST call URL to http://mywebapi-svc:5000/, but it still does not work.
In the webfrontend pod logs I find the error below:
Got error: getaddrinfo ENOTFOUND mywebapi-svc mywebapi-svc:5000
Sincerely appreciate all help
Thanks
Update: I changed mywebapi-svc so that it is no longer headless. The current YAML is below; the problem remains the same.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: '2019-09-29T15:21:44Z'
  name: mywebapi-svc
  namespace: default
  resourceVersion: '667545270'
  selfLink: /api/v1/namespaces/default/services/mywebapi-svc
  uid: d84503ee-e2cc-11e9-93ec-a65f0b53b1fa
spec:
  clusterIP: 172.19.0.74
  ports:
    - name: mywebapiport
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: mywebapi-default
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
It's because you're using a headless service.
You don't need a headless service for this job, because the DNS lookup for a headless service won't return a single service IP when more than one pod matches the label selector. Remove the spec.clusterIP field from your service.
apiVersion: v1
kind: Service
metadata:
  name: mywebapi-svc
  namespace: default
spec:
  # clusterIP: None   <-- remove
  ports:
    - name: mywebapiport
      port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: mywebapi
  type: ClusterIP
Now you can call the http://service-name.namespace-name.svc:port endpoint from your front end. It should work.
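A quick way to verify that from inside the cluster (a sketch; the pod name is hypothetical and assumes curl is available in the webfrontend image):

# call the service by its fully qualified name from a frontend pod
kubectl exec -it <webfrontend-pod> -- curl http://mywebapi-svc.default.svc:5000/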
In case your web API is stateful, you should use a StatefulSet instead of a Deployment.
Environment:
I have:
1- NGINX Ingress controller version: 1.15.9, image: 0.23.0
2- Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.4",
GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1",
GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z",
GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.4",
GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1",
GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z",
GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: Virtual Machines on KVM
OS (e.g. from /etc/os-release):
NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel
fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"
Kernel (e.g. uname -a):
Linux node01 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC
2019 x86_64 x86_64 x86_64 GNU/Linux
Install tools: kubeadm
More details:
CNI : WEAVE
Setup:
2 resilient HAProxy nodes, 3 masters, 2 infra nodes, and worker nodes.
I am exposing all the services as node ports, and the HAProxy reassigns them to a public virtual IP.
A dedicated project hosted on the infra nodes carries the monitoring and logging tools (Grafana, Prometheus, EFK, etc.).
Backend NFS storage as persistent storage
What happened:
I want to be able to use an external name rather than node ports, so instead of accessing Grafana, for instance, via VIP + 3000, I want to access it via http://grafana.wild-card-dns-zone
Deployment
I have created a new namespace called ingress
I deployed it as follows:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
name: nginx-ingress
spec:
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
node-role.kubernetes.io/infra: infra
terminationGracePeriodSeconds: 60
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
args:
- /nginx-ingress-controller
- --default-backend-service=ingress/ingress-controller-nginx-ingress-default-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
- --v3
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
labels:
app: nginx-ingress
chart: nginx-ingress-1.3.1
component: default-backend
name: ingress-controller-nginx-ingress-default-backend
namespace: ingress
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-ingress
component: default-backend
release: ingress-controller
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx-ingress
component: default-backend
release: ingress-controller
spec:
nodeSelector:
node-role.kubernetes.io/infra: infra
containers:
- image: k8s.gcr.io/defaultbackend:1.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: nginx-ingress-default-backend
ports:
- containerPort: 8080
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-configuration
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
name: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.3.1
component: default-backend
name: ingress-controller-nginx-ingress-default-backend
namespace: ingress
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
component: default-backend
release: ingress-controller
sessionAffinity: None
type: ClusterIP
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrolebinding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-rolebinding
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress
INGRESS SETUP:
services
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-03-25T16:03:01Z"
  labels:
    app: jaeger
    app.kubernetes.io/component: query
    app.kubernetes.io/instance: jeager
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: jeager-query
    app.kubernetes.io/part-of: jaeger
  name: jeager-query
  namespace: monitoring-logging
  resourceVersion: "3055947"
  selfLink: /api/v1/namespaces/monitoring-logging/services/jeager-query
  uid: 778550f0-4f17-11e9-9078-001a4a16021e
spec:
  externalName: jaeger.example.com
  ports:
  - port: 16686
    protocol: TCP
    targetPort: 16686
  selector:
    app: jaeger
    app.kubernetes.io/component: query
    app.kubernetes.io/instance: jeager
    app.kubernetes.io/managed-by: jaeger-operator
    app.kubernetes.io/name: jeager-query
    app.kubernetes.io/part-of: jaeger
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-03-25T15:40:30Z"
  labels:
    app: grafana
    chart: grafana-2.2.4
    heritage: Tiller
    release: grafana
  name: grafana
  namespace: monitoring-logging
  resourceVersion: "3053698"
  selfLink: /api/v1/namespaces/monitoring-logging/services/grafana
  uid: 51b9d878-4f14-11e9-9078-001a4a16021e
spec:
  externalName: grafana.example.com
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
    release: grafana
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
INGRESS
Ingress 1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/service-upstream: "true"
creationTimestamp: "2019-03-25T21:13:56Z"
generation: 1
labels:
app: jaeger
app.kubernetes.io/component: query-ingress
app.kubernetes.io/instance: jeager
app.kubernetes.io/managed-by: jaeger-operator
app.kubernetes.io/name: jeager-query
app.kubernetes.io/part-of: jaeger
name: jaeger-query
namespace: monitoring-logging
resourceVersion: "3111683"
selfLink: /apis/extensions/v1beta1/namespaces/monitoring-logging/ingresses/jaeger-query
uid: e6347f6b-4f42-11e9-9e8e-001a4a16021c
spec:
rules:
- host: jaeger.example.com
http:
paths:
- backend:
serviceName: jeager-query
servicePort: 16686
status:
loadBalancer: {}
Ingress 2
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"grafana"},"name":"grafana","namespace":"monitoring-logging"},"spec":{"rules":[{"host":"grafana.example.com","http":{"paths":[{"backend":{"serviceName":"grafana","servicePort":3000}}]}}]}}
creationTimestamp: "2019-03-25T17:52:40Z"
generation: 1
labels:
app: grafana
name: grafana
namespace: monitoring-logging
resourceVersion: "3071719"
selfLink: /apis/extensions/v1beta1/namespaces/monitoring-logging/ingresses/grafana
uid: c89d7f34-4f26-11e9-8c10-001a4a16021d
spec:
rules:
- host: grafana.example.com
http:
paths:
- backend:
serviceName: grafana
servicePort: 3000
status:
loadBalancer: {}
EndPoints
Endpoint 1
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: "2019-03-25T15:40:30Z"
labels:
app: grafana
chart: grafana-2.2.4
heritage: Tiller
release: grafana
name: grafana
namespace: monitoring-logging
resourceVersion: "3050562"
selfLink: /api/v1/namespaces/monitoring-logging/endpoints/grafana
uid: 51bb1f9c-4f14-11e9-9e8e-001a4a16021c
subsets:
- addresses:
- ip: 10.42.0.15
nodeName: kuinfra01.example.com
targetRef:
kind: Pod
name: grafana-b44b4f867-bcq2x
namespace: monitoring-logging
resourceVersion: "1386975"
uid: 433e3d21-4827-11e9-9e8e-001a4a16021c
ports:
- name: http
port: 3000
protocol: TCP
Endpoint 2
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: "2019-03-25T16:03:01Z"
labels:
app: jaeger
app.kubernetes.io/component: service-query
app.kubernetes.io/instance: jeager
app.kubernetes.io/managed-by: jaeger-operator
app.kubernetes.io/name: jeager-query
app.kubernetes.io/part-of: jaeger
name: jeager-query
namespace: monitoring-logging
resourceVersion: "3114702"
selfLink: /api/v1/namespaces/monitoring-logging/endpoints/jeager-query
uid: 7786d833-4f17-11e9-9e8e-001a4a16021c
subsets:
- addresses:
- ip: 10.35.0.3
nodeName: kunode02.example.com
targetRef:
kind: Pod
name: jeager-query-7d9775d8f7-2hwdn
namespace: monitoring-logging
resourceVersion: "3114693"
uid: fdac9771-4f49-11e9-9e8e-001a4a16021c
ports:
- name: query
port: 16686
protocol: TCP
I am able to curl the endpoints from inside the ingress-controller pod:
# kubectl exec -it nginx-ingress-controller-5dd67f88cc-z2g8s -n ingress -- /bin/bash
www-data#nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ curl -k https://localhost
Found.
www-data#nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ curl http://localhost
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>
www-data#nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ exit
But from outside, when I try to reach jaeger.example.com or grafana.example.com, I get a 502 Bad Gateway and the following error log:
10.39.0.0 - [10.39.0.0] - - [25/Mar/2019:16:40:32 +0000] "GET /search HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" 514 0.001 [monitoring-logging-jeager-query-16686] vip:16686, vip:16686, vip:16686 0, 0, 0 0.001, 0.000, 0.000 502, 502, 502 b7c813286fccf27fffa03eb6564edfd1
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
10.39.0.0 - [10.39.0.0] - - [25/Mar/2019:16:40:32 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "http://jeager.example.com/search" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" 494 0.001 [monitoring-logging-jeager-query-16686] vip:16686, vip:16686, vip:16686 0, 0, 0 0.000, 0.001, 0.000 502, 502, 502 9e582912614e67dfee6be1f679de5933
I0325 16:40:32.497868 8 socket.go:225] skiping metric for host jeager.example.com that is not being served
I0325 16:40:32.497886 8 socket.go:225] skiping metric for host jeager.example.com that is not being served
First, thanks to cookiedough for the clue regarding the service issue. Later I hit a problem creating the service using an external name, but I found my mistake thanks to the user "Long" on Slack: I was using a Service of type ExternalName when it should have been of type ClusterIP. Here are the steps that solved the problem (note: the HTTPS issue is a separate problem):
1- Create a wildcard DNS zone pointing to the public IP.
2- For each new service, just create it as type ClusterIP (a sketch of such a Service follows the example below).
3- In the namespace of the service, create an Ingress using the following example (YAML):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: grafana-namespace
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
4- Apply it: kubectl apply -f grafana-ingress.yaml
Now you can reach your Grafana at http://grafana.example.com
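For completeness, here is a sketch of the ClusterIP Service from step 2 (the port and selector values are assumed from the Grafana service shown earlier; adjust them to your deployment):

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: grafana-namespace
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: grafana
    release: grafana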