I want to use ingress with HAProxy in my Kubernetes cluster. How should I set it up?
I have tried it on my local system: I deployed the HAProxy ingress controller in a different namespace, but I am randomly getting 503 errors from the HAProxy pod that was created.
Try this.
Default backend:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: ingress-default-backend
name: ingress-default-backend
spec:
replicas: 1
selector:
matchLabels:
run: ingress-default-backend
template:
metadata:
labels:
run: ingress-default-backend
spec:
containers:
- name: ingress-default-backend
image: gcr.io/google_containers/defaultbackend:1.0
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
labels:
run: ingress-default-backend
name: ingress-default-backend
spec:
ports:
- name: port-1
port: 8080
protocol: TCP
targetPort: 8080
selector:
run: ingress-default-backend
HAProxy ingress controller:
apiVersion: v1
data:
dynamic-scaling: "true"
backend-server-slots-increment: "4"
kind: ConfigMap
metadata:
name: haproxy-configmap
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
spec:
replicas: 1
selector:
matchLabels:
run: haproxy-ingress
template:
metadata:
labels:
run: haproxy-ingress
spec:
containers:
- name: haproxy-ingress
image: quay.io/jcmoraisjr/haproxy-ingress
args:
- --default-backend-service=default/ingress-default-backend
- --default-ssl-certificate=default/tls-secret
- --configmap=$(POD_NAMESPACE)/haproxy-configmap
- --reload-strategy=native
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: stat
containerPort: 1936
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
---
apiVersion: v1
kind: Service
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
spec:
externalIPs:
- 172.17.0.50
ports:
- name: port-1
port: 80
protocol: TCP
targetPort: 80
- name: port-2
port: 443
protocol: TCP
targetPort: 443
- name: port-3
port: 1936
protocol: TCP
targetPort: 1936
selector:
run: haproxy-ingress
Update externalIPs as per your environment.
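Once the controller and default backend are running, you point Ingress resources at this controller with the kubernetes.io/ingress.class: haproxy annotation. A minimal sketch (the host, service name, and port are placeholders for your own app):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.class: haproxy
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: app        # a Service in the same namespace as this Ingress
          servicePort: 8080

As for the random 503s: HAProxy usually returns 503 when the backend Service has no ready endpoints, so check that the pods selected by your Service are Running and Ready (kubectl get endpoints <service-name>).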
I am new to Kubernetes deployment.
I can connect to the Kafka cluster from outside, but I am not able to do the same from within the cluster.
Here are my configurations:
kafka-broker:
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: crd-instrument
name: kafkaservice
spec:
replicas: 1
selector:
matchLabels:
app: kafkaservice
template:
metadata:
labels:
app: kafkaservice
spec:
hostname: kafkaservice
containers:
- name: kafkaservice
imagePullPolicy: IfNotPresent
image: wurstmeister/kafka
env:
- name: KAFKA_BROKER_ID
value: "1"
- name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
value: "INTERNAL_PLAINTEXT:PLAINTEXT,EXTERNAL_PLAINTEXT:PLAINTEXT"
- name: KAFKA_LISTENERS
value: "INTERNAL_PLAINTEXT://0.0.0.0:9092,EXTERNAL_PLAINTEXT://0.0.0.0:9093"
- name: KAFKA_ADVERTISED_LISTENERS
value: "INTERNAL_PLAINTEXT://kafkaservice:9092,EXTERNAL_PLAINTEXT://127.0.0.1:30035"
- name: KAFKA_INTER_BROKER_LISTENER_NAME
value: "INTERNAL_PLAINTEXT"
- name: KAFKA_CREATE_TOPICS
value: "crd_instrument_req:1:1,crd_instrument_resp:1:1,crd_instrument_resol:1:1"
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeperservice:2181"
ports:
- name: port9092
containerPort: 9092
- name: port9093
containerPort: 9093
kafka-service:
---
apiVersion: v1
kind: Service
metadata:
namespace: crd-instrument
name: kafkaservice
labels:
app: kafkaservice
spec:
selector:
app: kafkaservice
ports:
- name: port9092
port: 9092
targetPort: 9092
protocol: TCP
kafka-service-external:
---
apiVersion: v1
kind: Service
metadata:
namespace: crd-instrument
name: kafkaservice-external
labels:
app: kafkaservice-external
spec:
selector:
app: kafkaservice
ports:
- name: port9093
port: 9093
protocol: TCP
nodePort: 30035
type: NodePort
This is the client YAML that is started in the same namespace:
---
apiVersion: v1
kind: Pod
metadata:
name: crd-instrument-client
labels:
app: crd-instrument-client
namespace: crd-instrument
spec:
containers:
- name: crd-instrument-client
image: crd_instrument_client:1.0
imagePullPolicy: Never
The code inside it is trying to connect with BOOTSTRAP_SERVERS set to "kafkaservice:9092".
It's not connecting. Where am I going wrong? If someone could help point it out, please.
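One way to narrow this down (a debugging sketch; the busybox image choice is just a convenience, not part of your setup) is to verify that the Service name resolves from inside the namespace with a throwaway pod:

apiVersion: v1
kind: Pod
metadata:
  name: dns-check
  namespace: crd-instrument
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox:1.28                  # commonly used for in-cluster DNS checks
    command: ["nslookup", "kafkaservice.crd-instrument.svc.cluster.local"]

If the name resolves, the next thing to compare is the advertised listener: in-cluster clients are handed back INTERNAL_PLAINTEXT://kafkaservice:9092 from KAFKA_ADVERTISED_LISTENERS, so that exact name and port must be reachable from the client pod.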
I am trying to deploy a simple app on Google Cloud. I am testing the GitLab cluster integration.
Here is my Kubernetes YAML:
---
apiVersion: v1
kind: Service
metadata:
name: service
namespace: "my-service"
labels:
run: service
spec:
type: NodePort
selector:
run: "service"
ports:
- port: 9000
targetPort: 9000
protocol: TCP
name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: service-api
namespace: "my-service"
labels:
run: service
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: service-v1
namespace: "my-service"
labels:
run: service
spec:
replicas: 1
selector:
matchLabels:
run: service
template:
metadata:
labels:
run: service
spec:
serviceAccountName: service-api
containers:
- name: service
image: "gcr.io/test/service:latest"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 9000
protocol: TCP
volumeMounts:
- name: test
mountPath: /usr/test
volumes:
- name: test
emptyDir: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: "service-ingress"
namespace: "my-service"
labels:
run: service
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: service
servicePort: 9000
If I log into the pod I can curl the service on the designated NodePort, but if I try to hit the ingress address I just get an error.
I am not sure why there are 2 backend services on the load balancer that is created automatically; the one that points to my app shows as unhealthy.
(screenshot: load balancer backends)
You need to define a readiness probe in your pod spec, because the GKE ingress controller picks up its health check from the readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
name: service-v1
namespace: "my-service"
labels:
run: service
spec:
replicas: 1
selector:
matchLabels:
run: service
template:
metadata:
labels:
run: service
spec:
serviceAccountName: service-api
containers:
- name: service
image: "gcr.io/test/service:latest"
imagePullPolicy: IfNotPresent
readinessProbe:
httpGet:
path: /healthz
port: 8080
initialDelaySeconds: 3
periodSeconds: 3
ports:
- containerPort: 9000
protocol: TCP
volumeMounts:
- name: test
mountPath: /usr/test
volumes:
- name: test
emptyDir: {}
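One detail worth double-checking in the example above: the probe targets /healthz on port 8080, while the container serves on port 9000. The probe has to hit a path and port your app actually answers with a 200, otherwise the Pod never becomes Ready and the load balancer backend keeps reporting unhealthy. A variant assuming the app responds on / over its serving port:

readinessProbe:
  httpGet:
    path: /              # must be an endpoint your app returns 200 on
    port: 9000
  initialDelaySeconds: 3
  periodSeconds: 3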
I'm well versed in Docker, but I must be doing something wrong here with Kubernetes. I'm running Skaffold with minikube and trying to get DNS between containers working. Here's my deployment:
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-api
labels:
app: my-api
spec:
replicas: 1
selector:
matchLabels:
app: my-api
template:
metadata:
name: my-api
labels:
app: my-api
spec:
containers:
- name: my-api-postgres
image: postgres:11.2-alpine
env:
- name: POSTGRES_USER
value: "my-api"
- name: POSTGRES_DB
value: "my-api"
- name: POSTGRES_PASSWORD
value: "my-pass"
ports:
- containerPort: 5432
- name: my-api-redis
image: redis:5.0.4-alpine
command: ["redis-server"]
args: ["--appendonly", "yes"]
ports:
- containerPort: 6379
- name: my-api-node
image: my-api-node
command: ["npm"]
args: ["run", "start-docker-dev"]
ports:
- containerPort: 3000
However, in this scenario my-api-node can't contact my-api-postgres via the DNS hostname my-api-postgres. Any idea what I'm doing wrong?
You have defined all 3 containers as part of the same Pod. Containers in a Pod share a network namespace, so in your current setup (which is not what you want here, more on that in a second) you could talk to the other containers using localhost:<port>.
The 'correct' way of doing this would be to create a deployment for each application, and front those deployments with services.
Your example would roughly become (untested):
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-api-node
namespace: my-api
labels:
app: my-api-node
spec:
replicas: 1
selector:
matchLabels:
app: my-api-node
template:
metadata:
name: my-api-node
labels:
app: my-api-node
spec:
containers:
- name: my-api-node
image: my-api-node
command: ["npm"]
args: ["run", "start-docker-dev"]
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
namespace: my-api
name: my-api-node
spec:
selector:
app: my-api-node
ports:
- protocol: TCP
port: 3000
targetPort: 3000
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-api-redis
namespace: my-api
labels:
app: my-api-redis
spec:
replicas: 1
selector:
matchLabels:
app: my-api-redis
template:
metadata:
name: my-api-redis
labels:
app: my-api-redis
spec:
containers:
- name: my-api-redis
image: redis:5.0.4-alpine
command: ["redis-server"]
args: ["--appendonly", "yes"]
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
namespace: my-api
name: my-api-redis
spec:
selector:
app: my-api-redis
ports:
- protocol: TCP
port: 6379
targetPort: 6379
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-api-postgres
namespace: my-api
labels:
app: my-api-postgres
spec:
replicas: 1
selector:
matchLabels:
app: my-api-postgres
template:
metadata:
name: my-api-postgres
labels:
app: my-api-postgres
spec:
containers:
- name: my-api-postgres
image: postgres:11.2-alpine
env:
- name: POSTGRES_USER
value: "my-api"
- name: POSTGRES_DB
value: "my-api"
- name: POSTGRES_PASSWORD
value: "my-pass"
ports:
- containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
namespace: my-api
name: my-api-postgres
spec:
selector:
app: my-api-postgres
ports:
- protocol: TCP
port: 5432
targetPort: 5432
DNS records get registered for Services, so you connect to those and are forwarded to the pods behind them (simplified). If you need to reach your node app from the outside world, that is a separate concern, and you should look at LoadBalancer-type Services or Ingress.
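Concretely, once the Services above exist, the node app can reach the other components by their Service names (my-api-postgres and my-api-redis inside the my-api namespace, or fully qualified as my-api-postgres.my-api.svc.cluster.local). One way to wire that in, as a rough sketch (the variable names are whatever your app actually reads, not something Kubernetes requires), is via env on the my-api-node container:

env:
- name: POSTGRES_HOST
  value: "my-api-postgres"      # Service name, resolved by cluster DNS
- name: POSTGRES_PORT
  value: "5432"
- name: REDIS_HOST
  value: "my-api-redis"
- name: REDIS_PORT
  value: "6379"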
To add to johnharris85's point about DNS, which applies once you separate your apps (and you should, in your scenario):
Multi-container Pods are meant for specific use cases: for example, sidecar containers that help the main container with particular tasks, or proxies, bridges, and adapters that provide connectivity to some specific destination.
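For illustration, a minimal sketch of a legitimate multi-container Pod, here with a hypothetical log-shipping sidecar that tails the main container's logs over a shared volume (both image names are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
  - name: logs
    emptyDir: {}
  containers:
  - name: app
    image: my-app:1.0                    # main container writing to /var/log/app
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-shipper                    # sidecar reading the same volume
    image: busybox
    command: ["sh", "-c", "tail -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app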
In your case you can easily separate them. Right now you have one Deployment whose single Pod runs 3 containers, and they communicate with each other over localhost rather than DNS names, as already mentioned.
After separating them, I recommend reading about DNS inside Kubernetes and how communication works once Services come into play.
For Pod DNS specifically, you can read more here.
I've set up Kubernetes to use the Traefik Ingress to provide name-based routing. I am a little lost in terms of how to configure the automatic Let's Encrypt SSL certs. How do I reference the TOML files and configure HTTPS? I am using a simple container below, with the NGINX image, to test this.
Below is my YAML for the deployment/service/ingress.
apiVersion: v1
kind: Service
metadata:
name: web
labels:
app: hmweb
spec:
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
selector:
app: hmweb
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: web-ingress
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: example.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hmweb-deployment
labels:
app: hmweb
spec:
replicas: 1
selector:
matchLabels:
app: hmweb
template:
metadata:
labels:
app: hmweb
spec:
containers:
- name: hmweb
image: nginx:latest
envFrom:
- configMapRef:
name: config
ports:
- containerPort: 80
I have also included my ingress.yaml:
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: LoadBalancer
You could build a custom image and include the TOML file that way; however, that would not be best practice. Here's how I did it:
1) Deploy your TOML configuration to Kubernetes as a ConfigMap, like so:
apiVersion: v1
kind: ConfigMap
metadata:
name: cfg-traefik
labels:
app: traefik
data:
traefik.toml: |
defaultEntryPoints = ["http", "https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[acme]
email = "you#email.com"
storage = "/storage/acme.json"
entryPoint = "https"
acmeLogging = true
onHostRule = true
[acme.tlsChallenge]
2) Connect the configuration to your Traefik deployment. Here's my configuration:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: dpl-traefik
labels:
k8s-app: traefik
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik
template:
metadata:
labels:
k8s-app: traefik
name: traefik
spec:
serviceAccountName: svc-traefik
terminationGracePeriodSeconds: 60
volumes:
- name: config
configMap:
name: cfg-traefik
- name: cert-storage
persistentVolumeClaim:
claimName: pvc-traefik
containers:
- image: traefik:alpine
name: traefik
volumeMounts:
- mountPath: "/config"
name: "config"
- mountPath: "/storage"
name: cert-storage
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
- --configFile=/config/traefik.toml
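Note that this Deployment also references a ServiceAccount (svc-traefik) and a PersistentVolumeClaim (pvc-traefik) for the ACME storage, which are not shown above. A minimal sketch of the claim (the size and default storage class are assumptions, adjust to your cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-traefik
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi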
I have a problem with my ingress and my service: when I connect to the IP of my server, I cannot get it to redirect to the service I have associated with port 80, which is my website. Here are the configuration files and the description of the ingress:
apiVersion: v1
kind: Namespace
metadata:
name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
service: mysql
name: mysql
namespace: bookstack
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
service: mysql
spec:
containers:
- env:
- name: MYSQL_DATABASE
value: bookstack
- name: MYSQL_PASS
value: pass
- name: MYSQL_ROOT_PASSWORD
value: root
- name: MYSQL_USER
value: user
image: mysql:5.7
name: mysql
ports:
- containerPort: 3306
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
labels:
service: mysql
name: mysql
namespace: bookstack
spec:
type: NodePort
ports:
- name: "3306"
port: 3306
targetPort: 3306
selector:
service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: bookstack
name: bookstack
namespace: bookstack
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: bookstack
spec:
containers:
- env:
- name: namespace
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: podname
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: nodename
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: DB_DATABASE
value: bookstack
- name: DB_HOST
value: mysql
- name: DB_PASSWORD
value: root
- name: DB_USERNAME
value: root
image: solidnerd/bookstack:latest
name: bookstack
ports:
- name: http
containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
labels:
app: bookstack
name: bookstack
namespace: bookstack
spec:
type: NodePort
ports:
- name: http-port
port: 80
protocol: TCP
selector:
app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: http
namespace: bookstack
spec:
backend:
serviceName: bookstack
servicePort: http-port
This is what appears on my ingress:
Name: http
Namespace: bookstack
Address:
Default backend: bookstack:http-port (10.36.0.22:80)
Rules:
Host Path Backends
---- ---- --------
* * bookstack:http-port (10.36.0.22:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events: <none>
It doesn't return any external IP for me to connect to. Why could that be? I want to avoid using LoadBalancer as a service type.
The main problem was that I had not activated the load balancer that Google Kubernetes Engine offers by default; without it active, no external IP could be generated because there was no balancer. There are two solutions: either activate GKE's default load balancer or create a Service of type LoadBalancer.
It is also important to configure a readinessProbe and a livenessProbe within the deployment.
An example:
readinessProbe:
httpGet:
path: /login
port: 80
initialDelaySeconds: 5
timeoutSeconds: 1
periodSeconds: 15
livenessProbe:
httpGet:
path: /login
port: 80
initialDelaySeconds: 15
timeoutSeconds: 1
periodSeconds: 15
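If you go with the Service route instead, a minimal sketch for the bookstack app above (the Service name here is only an example; GKE provisions the external IP once it is applied):

apiVersion: v1
kind: Service
metadata:
  name: bookstack-lb
  namespace: bookstack
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
  selector:
    app: bookstack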
There wouldn't be an external IP, specifically because NodePort exposes the Service on that port on every node of your cluster. So, essentially, you would have to point an external load balancer, or whatever the traffic source is, at each of the nodes of your cluster on that specific NodePort.
Note that if you are using externalTrafficPolicy: Local, only the nodes that run pods for your service will reply.
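For reference, that policy lives on the Service spec; a small sketch based on the bookstack Service above:

apiVersion: v1
kind: Service
metadata:
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
  - name: http-port
    port: 80
  selector:
    app: bookstack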