Expose a Redis cluster with a Kubernetes StatefulSet to the internet

I created a StatefulSet that deploys a Redis image to GCP on Kubernetes. The challenge I am having is exposing it using a single domain name, such that the pods can be accessed at redis.com/first, redis.com/second and redis.com/third.
Here are the YAML files.
StatefulSet
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app-redis
spec:
  selector:
    matchLabels:
      app: app-redis
  serviceName: 'redis-service'
  replicas: 3
  template:
    metadata:
      labels:
        app: app-redis
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: app-redis
          image: redis
          args:
            - /etc/redis/redis.conf
          volumeMounts:
            - mountPath: /etc/redis
              name: redis-config
              readOnly: false
            - name: redis-storage
              mountPath: /data
              readOnly: false
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 150m
              memory: 256Mi
          ports:
            - containerPort: 6379
              name: redis
          livenessProbe:
            exec:
              command: ['redis-cli', 'ping']
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 2
      volumes:
        - name: redis-config
          configMap:
            name: redis-config
  volumeClaimTemplates:
    - metadata:
        name: redis-storage
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
Headless service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-redis
  name: redis-service
  namespace: default
spec:
  ports:
    - name: server-port
      port: 80
      protocol: TCP
      targetPort: 6379
  clusterIP: None
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0
Load balancer
apiVersion: v1
kind: Service
metadata:
  labels:
    app: redis-service
  name: app-redis
spec:
  externalTrafficPolicy: Local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 6379
  selector:
    app: app-redis
  type: LoadBalancer
  loadBalancerIP: xx.xx.xx.xxx
status:
  loadBalancer:
    ingress:
      - ip: xx.xx.xx.xxx
Config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: default
data:
  redis.conf: |
    dbfilename "dump.rdb"
    dir /data
    save 3600 1
    save 300 10
    save 60 100
    appendonly yes
    appendfilename "appendonly.aof"
Storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: redis-storage
provisioner: kubernetes.io/gce-pd
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: redis-ingress
  annotations:
    kubernetes.io/ingress.class: 'nginx'
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'false'
spec:
  rules:
    - host: app-redis.tk
      http:
        paths:
          - path: /
            backend:
              serviceName: app-redis
              servicePort: 80

Each pod in the StatefulSet will need to have its own Service linking to it.
Each of these Services needs to be created with a selector of the form:
selector:
  statefulset.kubernetes.io/pod-name: <POD_NAME>
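For example, a minimal Service for the first pod might look like this (the name redis-service-0 is chosen here to match the Ingress paths below; you need one such Service per pod):
apiVersion: v1
kind: Service
metadata:
  name: redis-service-0
spec:
  ports:
    - port: 6379
      protocol: TCP
      targetPort: 6379
  selector:
    statefulset.kubernetes.io/pod-name: app-redis-0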
Then you will be able to create an Ingress and use it to route traffic based on path:
...
spec:
  rules:
    - http:
        paths:
          - path: /app-redis-0
            backend:
              serviceName: redis-service-0
              servicePort: 6379
          - path: /app-redis-1
            backend:
              serviceName: redis-service-1
              servicePort: 6379
          - path: /app-redis-2
            backend:
              serviceName: redis-service-2
              servicePort: 6379
...
You can read more in Exposing StatefulSets in Kubernetes and Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?
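One caveat: Redis clients speak RESP over raw TCP, not HTTP, so path-based routing through an HTTP ingress will generally not work for redis-cli connections. With ingress-nginx, a common alternative is to expose each per-pod Service on its own TCP port through the controller's TCP services ConfigMap (the controller must be started with --tcp-services-configmap pointing at it, and the extra ports must also be opened on the controller's Service). A minimal sketch, assuming the per-pod Services above exist in the default namespace and the port numbers are free to choose:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <external port>: <namespace>/<service>:<port>
  "6379": "default/redis-service-0:6379"
  "6380": "default/redis-service-1:6379"
  "6381": "default/redis-service-2:6379"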

Related

Why does my Ingress.yaml file not expose my containerised frontend?

I'm using microk8s to run my containerised frontend in a k8s cluster. However, when I try to access it, I get a 'site can't be reached' error. I first tested it in minikube with minikube tunnel, and that works. What am I doing wrong here?
Note: I've enabled the ingress addon in microk8s with microk8s enable ingress.
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: http-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mf1
                port:
                  number: 80
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: mf1
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: mf1
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mf1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mf1
  template:
    metadata:
      labels:
        app: mf1
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: mf1
          image: nginx:latest
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
              name: redis
          imagePullPolicy: Always

Kubernetes Ingress can not access container by path

I am new to Kubernetes. I configured an Ingress and want to access the container at <minikube ip>/path, but the connection fails.
However, I can access it by using a host instead of a path, so I think the problem is in the Ingress.
I have no idea what to do; I hope someone can help me. Thanks.
Here are my Deployment, Service and Ingress YAML files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: portainer-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: portainer
  template:
    metadata:
      labels:
        app: portainer
    spec:
      containers:
        - name: portainer
          image: portainer/portainer:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rancher-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rancher
  template:
    metadata:
      labels:
        app: rancher
    spec:
      containers:
        - name: rancher
          image: rancher/server:latest
          resources:
            limits:
              memory: "128Mi"
              cpu: "500m"
          ports:
            - containerPort: 8080
apiVersion: v1
kind: Service
metadata:
  name: portainer-service
spec:
  selector:
    app: portainer
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: rancher-service
spec:
  selector:
    app: rancher
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
spec:
  rules:
    - http:
        paths:
          - path: /portainer
            backend:
              serviceName: portainer-service
              servicePort: 80
          - path: /rancher
            backend:
              serviceName: rancher-service
              servicePort: 80
Add this to the metadata section of your Ingress:
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /
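With that annotation in place, the metadata of the Ingress from the question would look roughly like this (the rest of the spec stays as-is); the intent is that the matched prefix such as /portainer is rewritten so the backend receives the request at /:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /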

How to setup HTTPS load balancer in kubernetes

I have a requirement to make my application support requests over HTTPS and to block the HTTP port. I want to use a certificate provided by my company, so do I need JKS certs or some other type? I'm not sure how to set up HTTPS in GKE. I have seen a couple of pieces of documentation, but they are not clear. This is my current Kubernetes deployment file. Please let me know how I can configure it.
apiVersion: v1
kind: Service
metadata:
  name: oms-integeration-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
  selector:
    app: integeration
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: integeration
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: integeration
    spec:
      containers:
        - name: esp
          image: gcr.io/endpoints-release/endpoints-runtime:1
          args: [
            "--http_port=8081",
            "--backend=127.0.0.1:8080",
            "--service=oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog",
            "--rollout_strategy=managed",
          ]
        - name: integeration-container
          image: us.gcr.io/gcp-dsw-oms-int-{{env}}/gke/oms-integ-service:{{tag}}
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: 500M
          env:
            - name: LOGGING_FILE
              value: "integeration-container"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: integeration-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "oms-int-ip"
    kubernetes.io/ingress.class: "gce"
spec:
  rules:
    - host: "oms.endpoints.gcp-dsw-oms-int-{{env}}.cloud.goog"
      http:
        paths:
          - path: /*
            backend:
              serviceName: oms-integeration-service
              servicePort: 80
You have to create a secret that contains your SSL certificate and then reference that secret in your ingress spec, as explained here.
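For a PEM-encoded certificate and key, the secret can be created with kubectl (Kubernetes TLS secrets expect PEM, not JKS; the secret name and file names below are placeholders):
kubectl create secret tls oms-tls --cert=tls.crt --key=tls.key
and then referenced from the Ingress spec, roughly like this:
spec:
  tls:
    - secretName: oms-tls
  rules:
    ...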

Deployed port is not getting exposed in kubernetes in nexus

I am working on creating a Nexus repo through Kubernetes. While browsing I came across this site (https://blog.sonatype.com/kubernetes-recipe-sonatype-nexus-3-as-a-private-docker-registry). I was able to create a service, a deployment and a pod, and everything was created without an issue, but I was not able to open it.
This is my nexus.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: nexus
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nexusvolume-local
  namespace: nexus
  labels:
    app: nexus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nexus
  namespace: nexus
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nexus
    spec:
      containers:
        - image: sonatype/nexus3
          imagePullPolicy: Always
          name: nexus
          ports:
            - containerPort: 8081
            - containerPort: 5000
          volumeMounts:
            - mountPath: /nexus-data
              name: nexus-data-volume
      volumes:
        - name: nexus-data-volume
          persistentVolumeClaim:
            claimName: nexusvolume-local
---
apiVersion: v1
kind: Service
metadata:
  name: nexus-service
  namespace: nexus
spec:
  ports:
    - port: 80
      targetPort: 8081
      protocol: TCP
      name: http
    - port: 5000
      targetPort: 5000
      protocol: TCP
      name: docker
  selector:
    app: nexus
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nexus-ingress
  namespace: nexus
  annotations:
    ingress.kubernetes.io/proxy-body-size: 100m
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
    - hosts:
        # CHANGE ME
        - docker.testnexusurl.com
        - nexus.testnexusurl.com
      secretName: nexus-tls
  rules:
    # CHANGE ME
    - host: nexus.testnexusurl.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nexus-service
              servicePort: 80
    # CHANGE ME
    - host: docker.testnexusurl.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nexus-service
              servicePort: 5000
When I run
kubectl describe service nexus-service -n nexus
I get:
Name:              nexus-service
Namespace:         nexus
Labels:            <none>
Annotations:       <none>
Selector:          app=nexus
Type:              ClusterIP
IP:                10.111.212.3
Port:              http  80/TCP
TargetPort:        8081/TCP
Endpoints:         172.17.0.11:8081
Port:              docker  5000/TCP
TargetPort:        5000/TCP
Endpoints:         172.17.0.11:5000
Session Affinity:  None
Events:            <none>
I tried opening it on the given port, but I get an error that the site can't be reached.
Can someone help me with this?
Thanks in advance.
Looking at the above YAML file, you have set the namespace to "nexus"; try running it with the namespace changed to "default".
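Before changing the namespace, it may also be worth checking whether the Service itself responds, for example by port-forwarding to it (a quick check, assuming the manifests above were applied as-is):
kubectl port-forward -n nexus svc/nexus-service 8081:80
and then browsing to http://localhost:8081. If that works, the problem lies in the Ingress layer rather than in the Service or pod.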

Unhealthy load balancer on GCE

I have a couple of services, and their load balancers work fine. Now I keep facing an issue with a service that runs fine, but when a load balancer is applied I cannot get it to work, because one service seems to be unhealthy and I cannot figure out why. How can I get that service healthy?
Here are my k8s YAML files.
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - name: api
              containerPort: 8080
Service.yaml
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - name: http
      port: 8080
    - name: external
      port: 80
      targetPort: 80
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - foo.bar.io
      secretName: api-tls
  rules:
    - host: foo.bar.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 80
The problem was solved by configuring the ports in the correct way: container, Service and LB (obviously) need to be aligned. I also added initialDelaySeconds to the readinessProbe.
LB:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: api
  namespace: production
  annotations:
    # kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - hosts:
        - api.foo.io
      secretName: api-tls
  rules:
    - host: api.foo.io
      http:
        paths:
          - path: /*
            backend:
              serviceName: api
              servicePort: 8080
Service:
kind: Service
apiVersion: v1
metadata:
  name: api
spec:
  selector:
    app: api
    role: backend
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      name: http
Deployment:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: api-production
spec:
  replicas: 1
  template:
    metadata:
      name: api
      labels:
        app: api
        role: backend
        env: production
    spec:
      containers:
        - name: api
          image: eu.gcr.io/foobarbar/api:1.0.0
          livenessProbe:
            httpGet:
              path: /readinez
              port: 8080
            initialDelaySeconds: 45
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 45
          env:
            - name: ENVIRONMENT
              value: "production"
            - name: GIN_MODE
              value: "release"
          resources:
            limits:
              memory: "500Mi"
              cpu: "100m"
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
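With these manifests the chain is aligned end to end: the Ingress sends traffic to servicePort 8080, the Service forwards it to targetPort 8080, and the container listens on containerPort 8080, while the 45-second initialDelaySeconds gives the application time to start before the first probe.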