I have the Hangfire dashboard working properly in my local environment. Now I'm trying to set it up as a container (pod) inside my cluster, deployed to Azure, so I can access the Hangfire dashboard through its URL. However, I'm having trouble accessing it.
Below is my setup:
[UsedImplicitly]
public void Configure(IApplicationBuilder app)
{
    var hangFireServerOptions = new BackgroundJobServerOptions
    {
        Activator = new ContainerJobActivator(app.ApplicationServices)
    };

    app.UseHealthChecks("/liveness");
    app.UseHangfireServer(hangFireServerOptions);
    app.UseHangfireDashboard("/hangfire", new DashboardOptions()
    {
        AppPath = null,
        DashboardTitle = "Hangfire Dashboard",
        Authorization = new[]
        {
            new HangfireCustomBasicAuthenticationFilter
            {
                User = Configuration.GetSection("HangfireCredentials:UserName").Value,
                Pass = Configuration.GetSection("HangfireCredentials:Password").Value
            }
        }
    });
    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
    HangfireJobScheduler.ScheduleJobs(app.ApplicationServices.GetServices<IScheduledTask>()); //serviceProvider.GetServices<IScheduledTask>()
}
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  selector:
    app: task-scheduler-api
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler
  template:
    metadata:
      labels:
        app: task-scheduler
    spec:
      containers:
        - name: task-scheduler
          image: <%image-name%>
          # Resources and limits
          resources:
            requests:
              cpu: <%cpu_request%>
              memory: <%memory_request%>
            limits:
              cpu: <%cpu_limit%>
              memory: <%memory_limit%>
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          readinessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 7
Ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: task-scheulder-api-ingress
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: task-scheduler-api
                port:
                  number: 80
  tls:
    - hosts:
        - example.com
      secretName: task-scheduler-tls-production
I'm trying to access the dashboard at example.com/hangfire, but I get 503 Service Temporarily Unavailable.
I checked the logs on the pod. Everything seems to be fine:
...
...
Content root path: /data
Now listening on: http://0.0.0.0:80
Now listening on: https://0.0.0.0:443
Application started. Press Ctrl+C to shut down.
....
Would anyone know what I'm missing and how to resolve it? Thank you.
I have figured out the issue. The main problem was that the app selector values in deployment.yml and service.yml did not match. Running kubectl get ep showed that no endpoints were assigned to the task-scheduler pods, meaning the Service was not actually routing any traffic to them.
As soon as I updated the values in deployment.yml and service.yml so they match, the URL became accessible.
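For anyone hitting the same 503: a quick way to confirm the wiring is to check whether the Service has endpoints and whether any pods actually carry the label it selects on. A minimal check using the names from the manifests below (output omitted):

kubectl get endpoints task-scheduler-api   # an empty ENDPOINTS column means the selector matches no pods
kubectl get pods -l app=task-scheduler     # an empty list while pods are Running means the label value is wrong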
service.yml
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  selector:
    app: task-scheduler # This needs to match the pod label inside the deployment
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler # This needs to match the value in the service
  template:
    metadata:
      labels:
        app: task-scheduler # This needs to match as well
    spec:
      containers:
        - name: task-scheduler
          image: <%image-name%>
          # Resources and limits
          resources:
            requests:
              cpu: <%cpu_request%>
              memory: <%memory_request%>
            limits:
              cpu: <%cpu_limit%>
              memory: <%memory_limit%>
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          readinessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 7
Hopefully someone will find this useful. Thank you!
This could be related to the ingress class, because it moved from an annotation to its own field in networking.k8s.io/v1:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: task-scheulder-api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "example.com"
      secretName: task-scheduler-tls-production
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /hangfire
            pathType: ImplementationSpecific
            backend:
              service:
                name: task-scheduler-api
                port:
                  number: 8080
You also do not need to expose ports 80 and 443 on the service, as the ingress is responsible for terminating TLS:
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: task-scheduler-api
For convenience you should also update the deployment:
- name: http
  containerPort: 80
  protocol: TCP
Here's my config:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  template:
    metadata:
      labels:
        pod: 338f54d2-8f89-4602-a848-efcbcb63233f
        svc: app
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - name: server
          image: server
          ports:
            - name: http-port
              containerPort: 3000
          resources:
            limits:
              memory: 128Mi
            requests:
              memory: 36Mi
          envFrom:
            - secretRef:
                name: db-env
            - secretRef:
                name: oauth-env
          startupProbe:
            httpGet:
              port: http
              path: /
            initialDelaySeconds: 1
            periodSeconds: 1
            failureThreshold: 10
          livenessProbe:
            httpGet:
              port: http
              path: /
            periodSeconds: 15
---
apiVersion: v1
kind: Service
metadata:
  name: app-service
spec:
  selector:
    pod: 338f54d2-8f89-4602-a848-efcbcb63233f
  ports:
    - port: 80
      targetPort: http-port
When I try that I can't connect to my site. When I change targetPort: http-port back to targetPort: 3000 it works fine. I thought the point of naming my port was so that I could use it in the targetPort. Does it not work with deployments?
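For reference, named ports do work with Deployments: a Service's targetPort is resolved against the containerPort names declared in the pod template, and httpGet probe ports resolve against those same names. A minimal sketch of the pairing, with hypothetical names:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: server
          image: nginx # hypothetical image
          ports:
            - name: web # the name a Service targetPort (and probe port) can refer to
              containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: web # resolved against the containerPort name above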
Good afternoon
I am working with ingress-nginx for service exposure in an on-premise Kubernetes cluster. In this cluster we manage two environments: Development (DEV) and Quality (QA).
What we want is to have one ingress-nginx controller for each environment (DEV and QA), but so far I have not been able to configure it. I am applying the following configuration, but I cannot get the controller at each indicated IP to serve only the requests for its environment. Example:
DEV environment
controller-deployment-dev.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-dev
      app.kubernetes.io/instance: ingress-nginx-dev
      app.kubernetes.io/component: controller-dev
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-dev
        app.kubernetes.io/instance: ingress-nginx-dev
        app.kubernetes.io/component: controller-dev
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      containers:
        - name: controller
          image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --default-ssl-certificate=develop/srvdevma1-ssl
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 1
              memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcold016
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
controller-svc-dev.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.169.12
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 30000
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-dev
    app.kubernetes.io/instance: ingress-nginx-dev
    app.kubernetes.io/component: controller-dev
rules ingress dev
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-develop
  namespace: develop
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - secretName: srvdevma1-ssl
  rules:
    - http:
        paths:
          - path: /api/FindComplementaryAccountInfo
            pathType: Prefix
            backend:
              service:
                name: find-complementary-account-info
                port:
                  number: 8083
          - path: /api/FindLimitedPackageBS
            pathType: Prefix
            backend:
              service:
                name: find-limited-package
                port:
                  number: 8082
          - path: /api/SendSMSBS
            pathType: Prefix
            backend:
              service:
                name: send-sms
                port:
                  number: 8084
          - path: /api/SubscribeLimitedPackageCS
            pathType: Prefix
            backend:
              service:
                name: subscribe-limited-package
                port:
                  number: 8085
To consume services in the development environment, we use the IP indicated in the controller service (controller-svc-dev.yaml) and port 30000:
https://10.161.169.12:30000/api/FindLimitedPackageBS
https://10.161.169.12:30000/api/FindComplementaryAccountInfo
QA environment
For the quality environment I have the following configuration, very similar to the development one, only with a different IP:
controller-deployment-qa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-tcold
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-qa
      app.kubernetes.io/instance: ingress-nginx-qa
      app.kubernetes.io/component: controller-qa
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-qa
        app.kubernetes.io/instance: ingress-nginx-qa
        app.kubernetes.io/component: controller-qa
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      containers:
        - name: controller
          image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --default-ssl-certificate=develop/srvdevma1-ssl
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 1
              memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcolt022
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
controller-svc-qa.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-qa
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.173.45
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 30000
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-qa
    app.kubernetes.io/instance: ingress-nginx-qa
    app.kubernetes.io/component: controller-qa
rules ingress qa
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-calidad
  namespace: calidad
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - secretName: srvdevma1-ssl
  rules:
    - http:
        paths:
          - path: /api/FindComplementaryAccountInfo
            pathType: Prefix
            backend:
              service:
                name: find-complementary-account-info
                port:
                  number: 8083
          - path: /api/FindLimitedPackageBS
            pathType: Prefix
            backend:
              service:
                name: find-limited-package
                port:
                  number: 8082
          - path: /api/SendSMSBS
            pathType: Prefix
            backend:
              service:
                name: send-sms
                port:
                  number: 8084
          - path: /api/SubscribeLimitedPackageCS
            pathType: Prefix
            backend:
              service:
                name: subscribe-limited-package
                port:
                  number: 8085
With that, you should be able to reach the services in this environment; compared to development, only the IP changes:
https://10.161.173.45:30000/api/FindLimitedPackageBS
https://10.161.173.45:30000/api/FindComplementaryAccountInfo
Is there any way to do what I describe with ingress-nginx, with the requirement that the same rules are kept for the services, but in different namespaces?
Update
I managed to find a solution through the following very good documentation:
https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
You can achieve this use case by using Ingress Classes. Ingresses can be implemented by different controllers, often with different configurations. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
You can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName. Ensure the --controller-class= and --ingress-class are set to something different on each ingress controller.
First, specify '--controller-class=k8s.io/internal-ingress-nginx' and '--ingress-class=internal-nginx' in the ingress-nginx deployment. Then use the same controller-class value in the IngressClass resource, and refer to that IngressClass from your Ingress using ingressClassName. Refer to Multiple Ingress Controllers for more information on how to set the Ingress class.
Note: If --controller-class is set to the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation and Ingresses with the annotation class set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies the specific class of Ingresses.
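To make that concrete, here is a minimal sketch of the pairing (the class and controller names are the example values from the docs, not anything deployed in this cluster):

# IngressClass: its spec.controller must equal the --controller-class
# argument of the controller deployment that should implement it.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: internal-nginx
spec:
  controller: k8s.io/internal-ingress-nginx
---
# An Ingress opts into that controller via ingressClassName.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-internal
spec:
  ingressClassName: internal-nginx
  rules:
    - http:
        paths:
          - path: /api/Example # hypothetical path
            pathType: Prefix
            backend:
              service:
                name: example-service # hypothetical service
                port:
                  number: 8080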
You can configure the Ingress Controller to handle configuration resources only from a particular namespace, which is controlled through the -watch-namespace command-line argument. This can be useful if you want to use different NGINX Ingress Controllers for different applications, both in terms of isolation and/or operation.
-watch-namespace is used to watch Namespace for Ingress resources. By default the Ingress Controller watches all namespaces.
Refer to Running Multiple Ingress Controllers for more information.
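Applied to the DEV/QA setup above, the two controller Deployments would then differ only in their args; a sketch with assumed class names (everything else stays as in the manifests above):

# controller-deployment-dev.yaml (args excerpt)
args:
  - /nginx-ingress-controller
  - --controller-class=k8s.io/ingress-nginx-dev # assumed per-environment value
  - --ingress-class=nginx-dev                   # assumed per-environment value
  - --watch-namespace=develop                   # only reconcile Ingresses in the DEV namespace

# controller-deployment-qa.yaml (args excerpt)
args:
  - /nginx-ingress-controller
  - --controller-class=k8s.io/ingress-nginx-qa
  - --ingress-class=nginx-qa
  - --watch-namespace=calidad

Each environment's Ingress then sets spec.ingressClassName to nginx-dev or nginx-qa accordingly, while the path rules stay identical across namespaces.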
I hope you can help me. I have the following problem with Istio: I want to receive HTTPS requests, but I get the error "curl: (52) Empty response from server", while HTTP requests work correctly. I attach my manifests.
A certificate has already been generated, and a secret has been created from the .crt and .key files.
I don't know what else I need so that HTTPS requests can work.
Istio Version: 1.8.2
Kubectl version client: 1.20.2
Response for HTTP & HTTPS
CURL -Iv for HTTPS
Gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: eks-gateway
  namespace: development
spec:
  selector:
    istio: ingressgateway # use Istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - mysite.domine.com
      tls:
        httpsRedirect: false
    - port:
        number: 443
        name: https
        protocol: HTTPS
      hosts:
        - mysite.domine.com
      tls:
        mode: SIMPLE
        credentialName: mysite-secret
VirtualService.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: eks-virtualservice
  namespace: development
spec:
  hosts:
    - mysite.domine.com
  gateways:
    - eks-gateway
  http:
    - match:
        - uri:
            prefix: /WeatherForecast
      route:
        - destination:
            host: eks-service
            port:
              number: 80
  tls:
    - match:
        - port: 443
          sniHosts:
            - mysite.domine.com
      route:
        - destination:
            host: eks-service
            port:
              number: 80
DestinationRule.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: eks-destinationrule
  namespace: development
spec:
  host: eks-service
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: eks-service
  namespace: development
  labels:
    app: eks-app
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: container-port
      protocol: TCP
      name: http-sv
    - port: 443
      targetPort: container-port
      protocol: TCP
      name: https-sv
  selector:
    app: eks-app
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2021-02-12T07:40:55Z"
  generation: 1
  labels:
    app: eks-app
    app.kubernetes.io/version: v1
    draft: draft-app
spec:
  progressDeadlineSeconds: 600
  replicas: 4
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: eks-app
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        buildID: ""
      creationTimestamp: null
      labels:
        app: eks-app
        draft: draft-app
        version: v1
    spec:
      containers:
        - image: XXXXXXXXXX.dkr.ecr.us-east-1.amazonaws.com/DockerRepo:v1
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /liveness
              port: container-port
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: eks-app
          ports:
            - containerPort: 80
              name: container-port
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /readiness
              port: container-port
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 4
  observedGeneration: 1
  readyReplicas: 4
  replicas: 4
  updatedReplicas: 4
I think it is a similar issue to the ones in these questions: the Gateway and VirtualService definitions don't match. The Gateway terminates TLS, but the VirtualService is defined to receive TLS traffic.
Try just deleting the TLS part in the VirtualService (see the sketch after the links below).
How do I properly HTTPS secure an application when using Istio?
Kubernetes Istio exposure not working with Virtualservice and Gateway
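A minimal sketch of that suggestion against the manifests above: keep the Gateway as-is (it terminates TLS with mode: SIMPLE), and drop the tls: block from the VirtualService so all traffic is routed as plain HTTP after termination:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: eks-virtualservice
  namespace: development
spec:
  hosts:
    - mysite.domine.com
  gateways:
    - eks-gateway
  http:
    - match:
        - uri:
            prefix: /WeatherForecast
      route:
        - destination:
            host: eks-service
            port:
              number: 80
  # no tls: section; the Gateway has already terminated TLS,
  # so requests reach the VirtualService as plain HTTP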
Apparently there is a problem between Istio and EKS, so I decided to install the AWS controller to get it to work properly.
I have a requirement to make my application support requests over HTTPS and block the HTTP port. I want to use the certificate provided by my company, so do I need JKS certs or some other type? I'm not sure how to set up HTTPS in GKE. I have seen a couple of documents, but they are not clear. This is my current Kubernetes deployment file. Please let me know how I can configure it.
apiVersion: v1
kind: Service
metadata:
  name: oms-integeration-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
  selector:
    app: integeration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: integeration
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: integeration
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8081",
            "--backend=127.0.0.1:8080",
            "--service=oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog",
            "--rollout_strategy=managed",
          ]
        - name: integeration-container
          image: us.gcr.io/gcp-dsw-oms-int-{{env}}/gke/oms-integ-service:{{tag}}
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 500M
          env:
            - name: LOGGING_FILE
              value: "integeration-container"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
You have to create a secret that contains your SSL certificate, and then reference that secret in your ingress spec, as explained here.
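A minimal sketch of those two steps, assuming your company certificate and key are available as PEM files (Kubernetes TLS secrets require PEM, not JKS; the file and secret names below are placeholders):

kubectl create secret tls oms-tls-secret --cert=company-cert.pem --key=company-key.pem

Then reference the secret from the ingress spec:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - secretName: oms-tls-secret # the secret created above
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80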
I have a couple of services, and their load balancers work fine. Now I keep facing an issue with a service that runs fine, but when a load balancer is applied I cannot get it to work, because one service seems to be unhealthy, and I cannot figure out why. How can I get that service healthy?
Here are my k8s YAMLs.
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - name: api
              containerPort: 8080
Service.yaml
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - name: http
      port: 8080
    - name: external
      port: 80
      targetPort: 80
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - foo.bar.io
      secretName: api-tls
  rules:
    - host: foo.bar.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 80
The problem was solved by configuring the ports in the correct way. Container, Service, and LB (obviously) need to be aligned. I also added initialDelaySeconds.
LB:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    # kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - api.foo.io
      secretName: api-tls
  rules:
    - host: api.foo.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 8080
Service:
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobarbar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 45
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080