Unhealthy load balancer on GCE - kubernetes

I have a couple of services and their load balancers work fine. Now I keep facing an issue with a service that runs fine on its own, but when a load balancer is applied I cannot get it to work, because one service seems to be unhealthy and I cannot figure out why. How can I get that service healthy?
Here are my k8s YAML files.
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - name: api
              containerPort: 8080
Service.yaml
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - name: http
      port: 8080
    - name: external
      port: 80
      targetPort: 80
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - foo.bar.io
      secretName: api-tls
  rules:
    - host: foo.bar.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 80

The problem was solved by configuring the ports in the correct way: container, Service and LB (obviously) need to be aligned. I also added initialDelaySeconds to the readiness probe.
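For anyone debugging a similar case, two checks make the unhealthy-backend cause visible quickly (a sketch; it assumes the resources live in the production namespace used by the Ingress):
# An empty ENDPOINTS column means the Service selects no ready pods,
# typically a selector/port mismatch or a failing readiness probe.
kubectl -n production get endpoints api
# The GCE controller annotates the Ingress with per-backend health.
kubectl -n production describe ingress api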
LB:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    # kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - api.foo.io
      secretName: api-tls
  rules:
    - host: api.foo.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 8080
Service:
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobarbar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 45
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080

Related

Configure ingress-nginx in a Kubernetes cluster with 2 namespaces

Good afternoon,
I am working with ingress-nginx for service exposure in an on-premise Kubernetes cluster. In this cluster we manage 2 environments: Development (DEV) and Quality (QA).
What we want is to have one ingress-nginx per environment (DEV and QA), but so far I have not been able to configure it. I am applying the following configuration, but I cannot get the IP assigned to each controller to route requests to its own environment. Example:
DEV environment
controller-deployment-dev.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-dev
      app.kubernetes.io/instance: ingress-nginx-dev
      app.kubernetes.io/component: controller-dev
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-dev
        app.kubernetes.io/instance: ingress-nginx-dev
        app.kubernetes.io/component: controller-dev
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      containers:
        - name: controller
          image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --default-ssl-certificate=develop/srvdevma1-ssl
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 1
              memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcold016
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
controller-svc-dev.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-dev
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.169.12
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 30000
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-dev
    app.kubernetes.io/instance: ingress-nginx-dev
    app.kubernetes.io/component: controller-dev
Ingress rules (DEV):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-develop
  namespace: develop
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - secretName: srvdevma1-ssl
  rules:
    - http:
        paths:
          - path: /api/FindComplementaryAccountInfo
            pathType: Prefix
            backend:
              service:
                name: find-complementary-account-info
                port:
                  number: 8083
          - path: /api/FindLimitedPackageBS
            pathType: Prefix
            backend:
              service:
                name: find-limited-package
                port:
                  number: 8082
          - path: /api/SendSMSBS
            pathType: Prefix
            backend:
              service:
                name: send-sms
                port:
                  number: 8084
          - path: /api/SubscribeLimitedPackageCS
            pathType: Prefix
            backend:
              service:
                name: subscribe-limited-package
                port:
                  number: 8085
To consume services in the development environment we use the IP assigned to the DEV controller (see controller-svc-dev.yaml) and port 30000:
https://10.161.169.12:30000/api/FindLimitedPackageBS
https://10.161.169.12:30000/api/FindComplementaryAccountInfo
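For example, a DEV call from outside the cluster looks like this (the -k flag skips certificate verification; that the srvdevma1-ssl certificate is not trusted by the client is just an assumption):
curl -k https://10.161.169.12:30000/api/FindLimitedPackageBS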
QA environment
For the quality environment I have the following configuration, very similar to the development one, only with a different IP:
controller-deployment-qa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.15
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-tcold
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-qa
      app.kubernetes.io/instance: ingress-nginx-qa
      app.kubernetes.io/component: controller-qa
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-qa
        app.kubernetes.io/instance: ingress-nginx-qa
        app.kubernetes.io/component: controller-qa
    spec:
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: regcred
      containers:
        - name: controller
          image: 10.164.7.203:37003/tmve/ingress-nginx/controller:v1.1.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
            - --default-ssl-certificate=develop/srvdevma1-ssl
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 1
              memory: 512Mi
      nodeSelector:
        kubernetes.io/hostname: tcolt022
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
controller-svc-qa.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.1
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-qa
  annotations:
    metallb.universe.tf/allow-shared-ip: shared-ip
  namespace: ingress-nginx
spec:
  externalTrafficPolicy: Cluster
  loadBalancerIP: 10.161.173.45
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 30000
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx-qa
    app.kubernetes.io/instance: ingress-nginx-qa
    app.kubernetes.io/component: controller-qa
Ingress rules (QA):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-calidad
  namespace: calidad
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  tls:
    - secretName: srvdevma1-ssl
  rules:
    - http:
        paths:
          - path: /api/FindComplementaryAccountInfo
            pathType: Prefix
            backend:
              service:
                name: find-complementary-account-info
                port:
                  number: 8083
          - path: /api/FindLimitedPackageBS
            pathType: Prefix
            backend:
              service:
                name: find-limited-package
                port:
                  number: 8082
          - path: /api/SendSMSBS
            pathType: Prefix
            backend:
              service:
                name: send-sms
                port:
                  number: 8084
          - path: /api/SubscribeLimitedPackageCS
            pathType: Prefix
            backend:
              service:
                name: subscribe-limited-package
                port:
                  number: 8085
The services in this environment should then be reachable in the same way; compared to development, only the IP changes:
https://10.161.173.45:30000/api/FindLimitedPackageBS
https://10.161.173.45:30000/api/FindComplementaryAccountInfo
Is there any way to do what I describe with ingress-nginx, given that the same rules must be kept for the services, just in different namespaces?
Update
I managed to find a solution through the following very good documentation:
https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/
You can achieve this use case by using Ingress Classes. Ingresses can be implemented by different controllers, often with different configurations. Each Ingress should specify a class, a reference to an IngressClass resource that contains additional configuration including the name of the controller that should implement the class.
You can deploy two Ingress controllers by granting them control over two different IngressClasses, then selecting one of the two IngressClasses with ingressClassName. Ensure the --controller-class= and --ingress-class are set to something different on each ingress controller.
First, specify '--controller-class=k8s.io/internal-ingress-nginx' and '--ingress-class=k8s.io/internal-nginx' in the ingress-nginx deployment. Then use the same value of controller-class in the IngressClass, and refer to that IngressClass in your Ingress using ingressClassName. Refer to Multiple Ingress Controllers for more information on how to set the Ingress class.
Note: If --controller-class is set to the default value of k8s.io/ingress-nginx, the controller will monitor Ingresses with no class annotation and Ingresses with the annotation class set to nginx. Use a non-default value for --controller-class to ensure that the controller only satisfies its specific class of Ingresses.
You can configure the Ingress Controller to handle configuration resources only from a particular namespace, which is controlled through the -watch-namespace command-line argument. This can be useful if you want to use different NGINX Ingress Controllers for different applications, both in terms of isolation and/or operation.
-watch-namespace is used to watch Namespace for Ingress resources. By default the Ingress Controller watches all namespaces.
Refer Running Multiple Ingress Controllers for more information.
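As a minimal sketch of the pieces described above (all names here are illustrative, not taken from the manifests in the question):
# IngressClass owned by the DEV controller; its controller field must match
# the controller's flags, e.g.:
#   --controller-class=k8s.io/ingress-nginx-dev
#   --ingress-class=nginx-dev
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx-dev
spec:
  controller: k8s.io/ingress-nginx-dev
---
# The DEV Ingress selects that class by name; the QA Ingress would reference
# its own class (e.g. nginx-qa) owned by the QA controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-develop
  namespace: develop
spec:
  ingressClassName: nginx-dev
  rules:
    - http:
        paths:
          - path: /api/FindLimitedPackageBS
            pathType: Prefix
            backend:
              service:
                name: find-limited-package
                port:
                  number: 8082
Adding -watch-namespace=develop (or calidad) to each controller's args would additionally restrict each controller to its own namespace.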

Hangfire dashboard url setup on kubernetes

I have the Hangfire dashboard working properly in my local environment. Now I'm trying to set it up as a container (pod) inside my cluster, deployed to Azure, so I can access the Hangfire dashboard through its URL. However, I'm having issues accessing it.
Below is my setup:
[UsedImplicitly]
public void Configure(IApplicationBuilder app)
{
    var hangFireServerOptions = new BackgroundJobServerOptions
    {
        Activator = new ContainerJobActivator(app.ApplicationServices)
    };
    app.UseHealthChecks("/liveness");
    app.UseHangfireServer(hangFireServerOptions);
    app.UseHangfireDashboard("/hangfire", new DashboardOptions()
    {
        AppPath = null,
        DashboardTitle = "Hangfire Dashboard",
        Authorization = new[]
        {
            new HangfireCustomBasicAuthenticationFilter
            {
                User = Configuration.GetSection("HangfireCredentials:UserName").Value,
                Pass = Configuration.GetSection("HangfireCredentials:Password").Value
            }
        }
    });
    app.UseHttpsRedirection();
    app.UseRouting();
    app.UseAuthorization();
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllers();
    });
    HangfireJobScheduler.ScheduleJobs(app.ApplicationServices.GetServices<IScheduledTask>()); //serviceProvider.GetServices<IScheduledTask>()
}
Service.yml
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  selector:
    app: task-scheduler-api
Deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler
  template:
    metadata:
      labels:
        app: task-scheduler
    spec:
      containers:
        - name: task-scheduler
          image: <%image-name%>
          # Resources and limit
          resources:
            requests:
              cpu: <%cpu_request%>
              memory: <%memory_request%>
            limits:
              cpu: <%cpu_limit%>
              memory: <%memory_limit%>
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          readinessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 7
Ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: task-scheulder-api-ingress
  namespace: default
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: task-scheduler-api
                port:
                  number: 80
  tls:
    - hosts:
        - example.com
      secretName: task-scheduler-tls-production
I'm trying to access the dashboard at example.com/hangfire, but I get 503 Service Temporarily Unavailable.
I'm checking the logs on the pod. Everything seems to be fine:
...
...
Content root path: /data
Now listening on: http://0.0.0.0:80
Now listening on: https://0.0.0.0:443
Application started. Press Ctrl+C to shut down.
....
Would anyone know what I'm missing and how to resolve it? Thank you.
I have figured out the issue. The main problem was that the selector app values in deployment.yml and service.yml did not match. Running kubectl get ep showed that no endpoint was assigned to the task scheduler pod, meaning the Service was not actually selecting it.
As soon as I updated the values in deployment.yml and service.yml, the URL became accessible.
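For reference, a quick way to confirm that the Service now selects the pod is to list its endpoints; a non-empty ENDPOINTS column is what you want to see (the addresses below are illustrative, not actual output):
kubectl get ep task-scheduler-api
NAME                 ENDPOINTS                        AGE
task-scheduler-api   10.244.1.23:80,10.244.1.23:443   1m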
service.yml
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 80
      targetPort: 80
      name: http
    - port: 443
      targetPort: 443
      name: https
  selector:
    app: task-scheduler # This needs to match the value inside the deployment
deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: task-scheduler
spec:
  selector:
    matchLabels:
      app: task-scheduler # This needs to match the value in the service
  template:
    metadata:
      labels:
        app: task-scheduler # This needs to match as well
    spec:
      containers:
        - name: task-scheduler
          image: <%image-name%>
          # Resources and limit
          resources:
            requests:
              cpu: <%cpu_request%>
              memory: <%memory_request%>
            limits:
              cpu: <%cpu_limit%>
              memory: <%memory_limit%>
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          readinessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 3
            periodSeconds: 5
            timeoutSeconds: 30
          livenessProbe:
            httpGet:
              path: /liveness
              port: 80
            initialDelaySeconds: 15
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 7
Hopefully someone will find this useful. Thank you.
This could also be related to the ingress class, because it moved from an annotation to a dedicated field in networking.k8s.io/v1:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: task-scheulder-api-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "example.com"
      secretName: task-scheduler-tls-production
  rules:
    - host: "example.com"
      http:
        paths:
          - path: /hangfire
            pathType: ImplementationSpecific
            backend:
              service:
                name: task-scheduler-api
                port:
                  number: 8080
You also do not need to specify ports 80 and 443 on the Service, as the ingress is responsible for terminating TLS:
apiVersion: v1
kind: Service
metadata:
  name: task-scheduler-api
spec:
  ports:
    - port: 8080
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app: task-scheduler-api
For convenience you should also update the deployment:
- name: http
  containerPort: 80
  protocol: TCP

Expose a redis cluster - with a kubernetes statefulset to the internet

I created a StatefulSet that deploys a Redis image to GCP on Kubernetes. The challenge I am having is exposing it using a single domain name, such that the pods can be accessed as redis.com/first, redis.com/second, redis.com/third.
Here are the YAML files:
Statefulset
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-redis
spec:
  selector:
    matchLabels:
      app: app-redis
  serviceName: 'redis-service'
  replicas: 3
  template:
    metadata:
      labels:
        app: app-redis
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: app-redis
          image: redis
          args:
            - /etc/redis/redis.conf
          volumeMounts:
            - mountPath: /etc/redis
              name: redis-config
              readOnly: false
            - name: redis-storage
              mountPath: /data
              readOnly: false
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 150m
              memory: 256Mi
          ports:
            - containerPort: 6379
              name: redis
          livenessProbe:
            exec:
              command: ['redis-cli', 'ping']
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 2
      volumes:
        - name: redis-config
          configMap:
            name: redis-config
  volumeClaimTemplates:
    - metadata:
        name: redis-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
Headless service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-redis
  name: redis-service
  namespace: default
spec:
  ports:
    - name: server-port
      port: 80
      protocol: TCP
      targetPort: 6379
  clusterIP: None
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0
Loadbalancer
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-service
  name: app-redis
spec:
  externalTrafficPolicy: Local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 6379
  selector:
    app: app-redis
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xxx
status:
  loadBalancer:
    ingress:
      - ip: xx.xx.xx.xxx
Config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: default
data:
  redis.conf: |
    dbfilename "dump.rdb"
    dir /data
    save 3600 1
    save 300 10
    save 60 100
    appendonly yes
    appendfilename "appendonly.aof"
Storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: redis-storage
provisioner: kubernetes.io/gce-pd
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'false'
spec:
  rules:
    - host: app-redis.tk
      http:
        paths:
          - path: /
            backend:
              serviceName: app-redis
              servicePort: 80
Each pod in the StatefulSet will need to have a service linking to it.
This service will need to be created with:
selector:
  statefulset.kubernetes.io/pod-name: <POD_NAME>
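A minimal sketch of one such per-pod Service, assuming the pod names of the StatefulSet above and the redis-service-N names used in the Ingress below:
apiVersion: v1
kind: Service
metadata:
  name: redis-service-0
spec:
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP
  # Matches exactly one pod via the label the StatefulSet controller
  # automatically adds to each of its pods.
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0
The same manifest is repeated for app-redis-1 and app-redis-2 as redis-service-1 and redis-service-2.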
Then you will be able to create an Ingress and use it to route traffic based on path:
...
spec:
  rules:
    - http:
        paths:
          - path: /app-redis-0
            backend:
              serviceName: redis-service-0
              servicePort: 6379
          - path: /app-redis-1
            backend:
              serviceName: redis-service-1
              servicePort: 6379
          - path: /app-redis-2
            backend:
              serviceName: redis-service-2
              servicePort: 6379
...
You can read about Exposing StatefulSets in Kubernetes and Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?

How to setup HTTPS load balancer in kubernetes

I have a requirement to make my application support requests over HTTPS and block the HTTP port. I want to use the certificate provided by my company, so do I need JKS certs or some other type? I'm not sure how to set up HTTPS in GKE. I have seen a couple of documents, but they are not clear. This is my current Kubernetes deployment file. Please let me know how I can configure it.
apiVersion: v1
kind: Service
metadata:
  name: oms-integeration-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
  selector:
    app: integeration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: integeration
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: integeration
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8081",
            "--backend=127.0.0.1:8080",
            "--service=oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog",
            "--rollout_strategy=managed",
          ]
        - name: integeration-container
          image: us.gcr.io/gcp-dsw-oms-int-{{env}}/gke/oms-integ-service:{{tag}}
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 500M
          env:
            - name: LOGGING_FILE
              value: "integeration-container"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
You have to create a secret that contains your SSL certificate and then reference that secret in your ingress spec, as explained here.
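A minimal sketch of those two steps, assuming the company certificate and key are available as PEM files (Kubernetes TLS secrets expect PEM rather than JKS; the file and secret names are illustrative):
kubectl create secret tls oms-tls --cert=oms.crt --key=oms.key
Then reference the secret from the Ingress so the GCE load balancer terminates HTTPS:
spec:
  tls:
    - secretName: oms-tls
To block plain HTTP on a GCE ingress, the annotation kubernetes.io/ingress.allow-http: "false" can be added to the Ingress metadata as well.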

Kubernetes ingress with 2 services does not always find the correct service

I have a Kubernetes cluster with a backend service and a security service.
The ingress is defined as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: solidary-life
  annotations:
    kubernetes.io/ingress.global-static-ip-name: sl-ip
    certmanager.k8s.io/acme-http01-edit-in-place: "true"
    ingress.kubernetes.io/force-ssl-redirect: "true"
    ingress.kubernetes.io/ssl-redirect: "true"
  labels:
    app: sl
spec:
  rules:
    - host: app-solidair-vlaanderen.com
      http:
        paths:
          - path: /v0.0.1/*
            backend:
              serviceName: backend-backend
              servicePort: 8080
          - path: /auth/*
            backend:
              serviceName: security-backend
              servicePort: 8080
  tls:
    - secretName: solidary-life-tls
      hosts:
        - app-solidair-vlaanderen.com
The backend service is configured like:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
  labels:
    app: sl
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
        - name: backend-app
          image: gcr.io/solidary-life-218713/sv-backend:0.0.6
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /v0.0.1/api/online
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: backend-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
and the auth server service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: security
  labels:
    app: sl-security
spec:
  template:
    metadata:
      labels:
        app: sl
        tier: web
    spec:
      containers:
        - name: security-app
          image: gcr.io/solidary-life-218713/sv-security:0.0.1
          ports:
            - name: http
              containerPort: 8080
            - name: management
              containerPort: 9090
            - name: jgroups-tcp
              containerPort: 7600
            - name: jgroups-tcp-fd
              containerPort: 57600
            - name: jgroups-udp
              containerPort: 55200
              protocol: UDP
            - name: jgroups-udp-mc
              containerPort: 45688
              protocol: UDP
            - name: jgroups-udp-fd
              containerPort: 54200
              protocol: UDP
            - name: modcluster
              containerPort: 23364
            - name: modcluster-udp
              containerPort: 23365
              protocol: UDP
            - name: txn-recovery-ev
              containerPort: 4712
            - name: txn-status-mgr
              containerPort: 4713
          readinessProbe:
            httpGet:
              path: /auth/
              port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: web
  ports:
    - port: 8080
      targetPort: 8080
Now I can go to the URLs:
https://app-solidair-vlaanderen.com/v0.0.1/api/online
https://app-solidair-vlaanderen.com/auth/
Sometimes this works, sometimes I get 404s. This is quite annoying, and since I am quite new to Kubernetes I can't find the error.
Can it have something to do with the "sl" label that's on both the backend and security service definitions?
Yes. At least that must be the start of the issue, assuming all your services are in the same Kubernetes namespace. Can you use a different label for each?
In essence, you have two Services that are each randomly selecting pods belonging to both the security Deployment and the backend Deployment, because both pod templates carry the same app: sl and tier: web labels. One way to determine where your service is really sending requests is to look at its endpoints by running:
kubectl -n <your-namespace> get ep    # or: kubectl -n <your-namespace> describe ep
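A minimal sketch of how the selectors could be disambiguated (the tier values here are illustrative and would also need to be set on the corresponding pod templates):
apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: security-web   # backend-backend would select tier: backend-web instead
  ports:
    - port: 8080
      targetPort: 8080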