I am trying to create a scalable Varnish cluster on a managed Kubernetes service (Azure's, Google's, or Amazon's), but I'm having trouble getting started. Any advice or references are helpful, thanks!
We (Varnish Software) are working on official Helm charts to make k8s deployments a lot easier. For the time being, we only have an official Docker image.
You can find install instructions on https://www.varnish-software.com/developers/tutorials/running-varnish-docker/.
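For a quick local test before moving to Kubernetes, a minimal sketch of running that image with Docker (assuming a default.vcl in the current directory; the image expects the VCL at /etc/varnish/default.vcl and listens on port 80):

# Mount a local VCL file and expose Varnish on localhost:8080
docker run --rm -it \
  -v "$(pwd)/default.vcl:/etc/varnish/default.vcl:ro" \
  -p 8080:80 \
  varnish:stable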
However, I have some standalone k8s files that can be a good way to get started.
Config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: varnish
  labels:
    name: varnish
data:
  default.vcl: |+
    vcl 4.1;
    backend default none;
    sub vcl_recv {
        if (req.url == "/varnish-ping") {
            return(synth(200));
        }
        if (req.url == "/varnish-ready") {
            return(synth(200));
        }
        return(synth(200,"Welcome"));
    }
This config map contains the VCL file. The VCL doesn't do anything useful besides exposing the /varnish-ping & /varnish-ready endpoints. Please customize it to your needs.
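For example, to actually cache an in-cluster application instead of returning synthetic responses, the backend default none; line could be replaced with a real backend definition. A minimal sketch, assuming a hypothetical Service named my-app listening on port 8080 in the same namespace:

vcl 4.1;

# Hypothetical in-cluster backend; point host/port at your app's Service
backend default {
    .host = "my-app";
    .port = "8080";
}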
Service definition
Here's a basic service definition that exposes port 80
apiVersion: v1
kind: Service
metadata:
  name: varnish
  labels:
    name: varnish
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: varnish-http
  selector:
    name: varnish
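The ClusterIP Service above is only reachable inside the cluster. On a managed cloud cluster (AKS, GKE, or EKS) you would typically put an Ingress or a LoadBalancer-type Service in front of Varnish; a minimal sketch of the latter, reusing the same name: varnish selector:

apiVersion: v1
kind: Service
metadata:
  name: varnish-external
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    name: varnish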
Deployment
And finally, here's the deployment. It uses the official Varnish Docker image; the stable tag used below corresponded to the 6.0 LTS version at the time of writing.
It uses the synthetic /varnish-ping & /varnish-ready endpoints as liveness and readiness probes, and mounts the config map under /etc/varnish to load the VCL file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: varnish
  labels:
    name: varnish
spec:
  replicas: 1
  selector:
    matchLabels:
      name: varnish
  template:
    metadata:
      labels:
        name: varnish
    spec:
      containers:
        - name: varnish
          image: "varnish:stable"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /varnish-ping
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /varnish-ready
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 5
          volumeMounts:
            - name: varnish
              mountPath: /etc/varnish
      volumes:
        - name: varnish
          configMap:
            name: varnish
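Since the question is about a scalable cluster, note that you can either raise replicas or attach a HorizontalPodAutoscaler to this Deployment. A minimal sketch, assuming metrics-server is available in the cluster:

apiVersion: autoscaling/v2   # use autoscaling/v2beta2 on older clusters
kind: HorizontalPodAutoscaler
metadata:
  name: varnish
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: varnish
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 75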
Deploying the config
Run kubectl apply -f . in the folder with the various k8s files (config map, service definition & deployment). This is the output you'll get:
$ kubectl apply -f .
configmap/varnish created
deployment.apps/varnish created
service/varnish created
By running kubectl get all you'll see the status of the deployment.
When running this on your local computer, just call kubectl port-forward service/varnish 8080:80 to port forward the Varnish service to localhost:8080. This allows you to test Varnish on k8s locally by accessing http://localhost:8080.
Run kubectl delete -f . to tear it down again.
Disclaimer
Although these configs were featured in my Varnish 6 by Example book, this is not an official tutorial. These scripts can probably be improved; however, they are a simple way to get started.
Try this Varnish on Kubernetes operator.
Related
I'm playing around with k8s Services. I have created a simple Spring Boot app that displays its version number and pod name when curling an endpoint:
curl localhost:9000/version
1.3_car-registry-deployment-66684dd8c4-r274b
Then I dockerized it, pushed it into my local Kind cluster and deployed it with 5 replicas. Next I created a Service targeting all 5 pods. Lastly, I exposed the Service like so:
kubectl port-forward svc/car-registry-service 9000:9000
Now when curling my endpoint I expected to see randomly picked pod names, but instead I only get responses from a single pod. Moreover, if I kill that one pod, my service stops working, i.e. I get ERR_EMPTY_RESPONSE, even though there are 4 more pods available. What am I missing? Here are my deployment and service YAMLs:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: car-registry-deployment
spec:
  replicas: 5
  selector:
    matchLabels:
      app: car-registry
  template:
    metadata:
      name: car-registry
      labels:
        app: car-registry
    spec:
      containers:
        - name: car-registry
          image: car-registry-database:v1.3
          ports:
            - containerPort: 9000
              protocol: TCP
              name: rest
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - curl http://localhost:9000/healthz | grep "OK"
            initialDelaySeconds: 15
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: car-registry-service
spec:
  type: ClusterIP
  selector:
    app: car-registry
  ports:
    - protocol: TCP
      port: 9000
      targetPort: 9000
You’re using TCP, so you’re probably using keep-alive. Try to hit it with your browser or a new tty.
Try:
curl -H "Connection: close" http://your-service:port/path
Otherwise, check the kube-proxy logs to see if there's any additional info. Your initial question doesn't provide much detail.
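If you want to rule out the local port-forward tunnel and keep-alive entirely, you can also curl the Service's DNS name from a throwaway pod inside the cluster. A sketch, using the curlimages/curl image as an assumption (any image with curl works):

kubectl run tmp-curl --rm -it --restart=Never --image=curlimages/curl -- \
  sh -c 'for i in $(seq 1 10); do curl -s http://car-registry-service:9000/version; echo; done'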
We are creating a deployment in which the command needs the IP of the pre-existing service pointing to a statefulset. Below is the manifest file for the deployment. Currently, we are manually entering the service external IP inside this deployment manifest. Now we would like it to auto-populate during runtime. Is there a way to achieve this dynamically using environment variables or another way?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-api
  namespace: app-api
spec:
  selector:
    matchLabels:
      app: app-api
  replicas: 1
  template:
    metadata:
      labels:
        app: app-api
    spec:
      containers:
        - name: app-api
          image: asia-south2-docker.pkg.dev/rnd20/app-api/api:09
          command: ["java","-jar","-Dallow.only.apigateway.request=false","-Dserver.port=8084","-Ddedupe.searcher.url=http://10.10.0.6:80","-Dspring.cloud.zookeeper.connect-string=10.10.0.6:2181","-Dlogging$.file.path=/usr/src/app/logs/springboot","/usr/src/app/app_api/dedupe-engine-components.jar",">","/usr/src/app/out.log"]
          livenessProbe:
            httpGet:
              path: /health
              port: 8084
              httpHeaders:
                - name: Custom-Header
                  value: ""
            initialDelaySeconds: 60
            periodSeconds: 60
          ports:
            - containerPort: 4016
          resources:
            limits:
              cpu: 1
              memory: "2Gi"
            requests:
              cpu: 1
              memory: "2Gi"
NOTE: The IP in question here is the internal load balancer IP, i.e. the external IP of the Service, and the Service is in a different namespace. Below is the manifest for that Service:
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: app
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
  labels:
    app: app
spec:
  selector:
    app: app
  type: LoadBalancer
  ports:
    - name: container
      port: 80
      targetPort: 8080
      protocol: TCP
You could use the following command instead:
command:
  - /bin/bash
  - -c
  - |-
    set -exuo pipefail
    ip=$(dig +search +short servicename.namespacename)
    exec java -jar -Dallow.only.apigateway.request=false -Dserver.port=8084 -Ddedupe.searcher.url=http://$ip:80 -Dspring.cloud.zookeeper.connect-string=$ip:2181 -Dlogging$.file.path=/usr/src/app/logs/springboot /usr/src/app/app_api/dedupe-engine-components.jar > /usr/src/app/out.log
It first resolves the IP address using dig (if you don't have dig in your image, you need to substitute something else you have), then execs your original java command.
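For instance, if dig isn't available, getent usually is on glibc-based images; a sketch (adjust the service and namespace names):

# Resolve the Service's cluster IP without dig
ip=$(getent hosts servicename.namespacename | awk '{print $1}')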
As of today I'm not aware of any "native" kubernetes way to provide IP meta information directly to the pod.
If you are sure they exist before, and you deploy into the same namespace, you can read them from environment variables. It's documented here: https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables.
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. It adds {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT variables, where the Service name is upper-cased and dashes are converted to underscores. It also supports variables (see makeLinkVariables) that are compatible with Docker Engine's "legacy container links" feature.
For example, the Service redis-master which exposes TCP port 6379 and has been allocated cluster IP address 10.0.0.11, produces the following environment variables:
REDIS_MASTER_SERVICE_HOST=10.0.0.11
REDIS_MASTER_SERVICE_PORT=6379
REDIS_MASTER_PORT=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP=tcp://10.0.0.11:6379
REDIS_MASTER_PORT_6379_TCP_PROTO=tcp
REDIS_MASTER_PORT_6379_TCP_PORT=6379
REDIS_MASTER_PORT_6379_TCP_ADDR=10.0.0.11
Note: those won't update after the container is started.
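For example, if that Deployment and the Service (hypothetically named app) lived in the same namespace, the injected variables could be consumed through a shell at startup, along these lines:

command:
  - /bin/sh
  - -c
  - |
    # APP_SERVICE_HOST / APP_SERVICE_PORT are injected by the kubelet
    # for a Service named "app" that exists before this pod starts
    exec java -jar \
      -Ddedupe.searcher.url=http://${APP_SERVICE_HOST}:${APP_SERVICE_PORT} \
      /usr/src/app/app_api/dedupe-engine-components.jar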
I am configuring a statefulset deploying 2 Jira DataCenter nodes. The statefulset results in 2 pods. Everything seems fine until the 2 pods try to connect to each other. They do this with their short hostnames, jira-0 and jira-1.
The jira-1 pod reports an UnknownHostException when connecting to jira-0. The hostname cannot be resolved.
I read about adding a headless service, which I didn't have yet. After adding that I can resolve the FQDN, but still no luck for the short name.
Then I read this page: DNS for Services and Pods and added:
dnsConfig:
  searches:
    - jira.default.svc.cluster.local
That solves my issue but I think it shouldn't be necessary to add this?
Some extra info:
Cluster on AKS with CoreDNS
Kubernetes v1.19.9
Network plugin: Kubenet
Network policy: none
My full yaml file:
apiVersion: v1
kind: Service
metadata:
  name: jira
  labels:
    app: jira
spec:
  clusterIP: None
  selector:
    app: jira
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jira
spec:
  serviceName: jira
  replicas: 2
  selector:
    matchLabels:
      app: jira
  template:
    metadata:
      labels:
        app: jira
    spec:
      containers:
        - name: jira
          image: atlassian/jira-software:8.12.2-jdk11
          readinessProbe:
            httpGet:
              path: /jira/status
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /jira/
              port: 8080
            initialDelaySeconds: 600
            periodSeconds: 10
          envFrom:
            - configMapRef:
                name: jira-config
          ports:
            - containerPort: 8080
      dnsConfig:
        searches:
          - jira.default.svc.cluster.local
That solves my issue but I think it shouldn't be necessary to add this?
From the StatefulSet documentation:
StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service.
The example above will create three Pods named web-0,web-1,web-2. A StatefulSet can use a Headless Service to control the domain of its Pods.
The pod identity will be a subdomain of the governing Service; in your case, for example:
jira-0.jira.default.svc.cluster.local
jira-1.jira.default.svc.cluster.local
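You can verify resolution from inside one of the pods, e.g. (assuming getent is available in the Jira image):

kubectl exec -it jira-0 -- getent hosts jira-1.jira
kubectl exec -it jira-0 -- getent hosts jira-1.jira.default.svc.cluster.local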
I am trying to set up a dev environment using minikube (on Mac, VirtualBox) with a custom webapp and Grafana. I just cannot seem to make the nginx ingress controller and the Grafana root_url option work together.
Since Grafana, nginx and minikube are such popular tools, I must be overlooking something and I am completely out of ideas/luck. I would really really appreciate if someone can help me out here. I have spent quite a bit of time on this.
The host port 32080 is network-mapped in VirtualBox to minikube port 32080.
my-ingress-demo.com is added as 127.0.0.1 in /etc/hosts on my Mac. When I go to http://my-ingress-demo.com:32080/grafana/ I keep hitting the error (browser: Chrome) that says:
If you're seeing this Grafana has failed to load its application files
1. This could be caused by your reverse proxy settings.
2. If you host grafana under subpath make sure your grafana.ini root_url setting includes subpath
3. If you have a local dev build make sure you build frontend using: yarn start, yarn start:hot, or yarn build
4. Sometimes restarting grafana-server can help
First I installed the nginx ingress controller using Helm:
helm install stable/nginx-ingress --name r1 --set controller.service.nodePorts.http=32080 --set controller.service.type=NodePort --set controller.service.nodePort=8080 --namespace default
My grafana deployment is
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      podLabel: GrafanaDemoPod
  template:
    metadata:
      labels:
        podLabel: GrafanaDemoPod
    spec:
      containers:
        - name: grafana-demo
          image: "grafana/grafana:6.1.4"
          imagePullPolicy: Always
          env:
            - name: GF_SERVER_ROOT_URL
              value: "%(protocol)s://%(domain)s:/grafana"
            - name: GF_SERVER_DOMAIN
              value: "my-ingress-demo.com"
          ports:
            - name: grafana-cntrprt
              containerPort: 3000
              protocol: TCP
          livenessProbe:
            failureThreshold: 10
            httpGet:
              path: /api/health
              port: grafana-cntrprt
            initialDelaySeconds: 60
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /api/health
              port: grafana-cntrprt
The corresponding service is defined as follows
apiVersion: v1
kind: Service
metadata:
  name: grafana-demo-svc
  labels:
    svc_category: front-end
spec:
  selector:
    podLabel: GrafanaDemoPod
  ports:
    - name: grafana-svcport
      port: 3000
      targetPort: 3000
My ingress is defined as follows
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: grafana-app-ingress
spec:
  rules:
    - host: my-ingress-demo.com
      http:
        paths:
          - path: "/grafana/"
            backend:
              serviceName: grafana-demo-svc
              servicePort: grafana-svcport
I was expecting to see the login screen, but I keep getting 404s. If I try to include the common rewrite rule in the ingress config, I get a "too many redirects" error.
I tried an even simpler setup using docker-compose but am still hitting the same issue. I must be doing something really stupid here:
version: '3.3'
services:
  grafana:
    image: "grafana/grafana"
    container_name: 'grafanaxxx'
    ports:
      - '3000:3000'
    environment:
      - GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s/grafana/
Try using this (note the trailing slash):
GF_SERVER_ROOT_URL=%(protocol)s://%(domain)s/grafana/
And this path in Nginx:
paths:
  - path: "/grafana"
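Putting both changes together, a sketch of the adjusted Ingress in the same extensions/v1beta1 style the question uses (with GF_SERVER_ROOT_URL set to %(protocol)s://%(domain)s/grafana/ on the Grafana container):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grafana-app-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: my-ingress-demo.com
      http:
        paths:
          - path: "/grafana"
            backend:
              serviceName: grafana-demo-svc
              servicePort: grafana-svcport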
When I create a GCE ingress, Google Load Balancer does not set the health check from the readiness probe. According to the docs (Ingress GCE health checks) it should pick it up.
Expose an arbitrary URL as a readiness probe on the pods backing the Service.
Any ideas why?
Deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  selector:
    matchLabels:
      app: frontend-prod
  replicas: 3
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: frontend-prod
    spec:
      imagePullSecrets:
        - name: regcred
      containers:
        - image: app:latest
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 3000
            initialDelaySeconds: 15
            periodSeconds: 5
          name: frontend-prod-app
        - env:
            - name: PASSWORD_PROTECT
              value: "1"
          image: nginx:latest
          readinessProbe:
            httpGet:
              path: /health
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
          name: frontend-prod-nginx
Service:
apiVersion: v1
kind: Service
metadata:
  name: frontend-prod
  labels:
    app: frontend-prod
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: frontend-prod
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend-prod-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: frontend-prod-ip
spec:
  tls:
    - secretName: testsecret
  backend:
    serviceName: frontend-prod
    servicePort: 80
So apparently, you need to include the container port on the PodSpec.
This does not seem to be documented anywhere. For example:
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
        - containerPort: 80
Thanks, Brian! https://github.com/kubernetes/ingress-gce/issues/241
This is now possible in the latest GKE (I am on 1.14.10-gke.27, not sure if that matters)
Define a readinessProbe on your container in your Deployment.
Recreate your Ingress.
The health check will point to the path in readinessProbe.httpGet.path of the Deployment yaml config.
Update by Jonathan Lin below: This has been fixed very recently. Define a readinessProbe on the Deployment. Recreate your Ingress. It will pick up the health check path from the readinessProbe.
The GKE Ingress health check path is currently not configurable. You can go to http://console.cloud.google.com (UI) and visit the Load Balancers list to see the health check it uses.
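You can also list the provisioned health checks from the command line, e.g.:

gcloud compute health-checks list
gcloud compute http-health-checks list   # legacy HTTP health checks, if any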
Currently the health check for an Ingress is a GET / request on each backend specified on the Ingress. So all your apps behind a GKE Ingress must return HTTP 200 OK to GET / requests.
That said, the health checks you specified on your Pods are still used by the kubelet to make sure your Pod is actually functioning and healthy.
Google has recently added support for a CRD that can configure your backend services along with their health checks:
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: backend-config
  namespace: prod
spec:
  healthCheck:
    checkIntervalSec: 30
    port: 8080
    type: HTTP # case-sensitive
    requestPath: /healthcheck
See here.
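The BackendConfig is then referenced from the Service via an annotation; a sketch (on older GKE versions the annotation carries a beta. prefix):

apiVersion: v1
kind: Service
metadata:
  name: frontend-prod
  annotations:
    cloud.google.com/backend-config: '{"default": "backend-config"}'
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: frontend-prod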
Another reason why Google Cloud Load Balancer does not pick-up GCE health check configuration from Kubernetes Pod readiness probe could be that the service is configured as "selectorless" (the selector attribute is empty and you manage endpoints directly).
This is the case with e.g. kube-lego: see https://github.com/jetstack/kube-lego/issues/68#issuecomment-303748457 and https://github.com/jetstack/kube-lego/issues/68#issuecomment-327457982.
The original question does have a selector specified in the Service, so this hint doesn't apply there; it serves visitors who have the same problem with a different cause.
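For reference, a "selectorless" Service is one declared without spec.selector, paired with a manually managed Endpoints object; a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: external-backend
spec:
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-backend   # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.42      # hypothetical backend IP
    ports:
      - port: 8080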