MicroK8s + MetalLB + Ingress - Kubernetes

I'm quite new to Kubernetes and I'm trying to set up a MicroK8s test environment on a VPS running CentOS.
What I did:
I set up the cluster and enabled the ingress and MetalLB add-ons:
microk8s enable ingress
microk8s enable metallb
I exposed the ingress controller service:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress-microk8s
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
I exposed an nginx Deployment to test the ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-deploy
  template:
    metadata:
      labels:
        run: nginx-deploy
    spec:
      containers:
        - image: nginx
          name: nginx
This is the status of my cluster:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/hostpath-provisioner-5c65fbdb4f-m2xq6 1/1 Running 3 41h
kube-system pod/coredns-86f78bb79c-7p8bs 1/1 Running 3 41h
kube-system pod/calico-node-g4ws4 1/1 Running 6 42h
kube-system pod/calico-kube-controllers-847c8c99d-xhmd7 1/1 Running 4 42h
kube-system pod/metrics-server-8bbfb4bdb-ggvk7 1/1 Running 0 41h
kube-system pod/kubernetes-dashboard-7ffd448895-ktv8j 1/1 Running 0 41h
kube-system pod/dashboard-metrics-scraper-6c4568dc68-l4xmg 1/1 Running 0 41h
container-registry pod/registry-9b57d9df8-xjh8d 1/1 Running 0 38h
cert-manager pod/cert-manager-cainjector-5c6cb79446-vv5j2 1/1 Running 0 12h
cert-manager pod/cert-manager-794657589-srrmr 1/1 Running 0 12h
cert-manager pod/cert-manager-webhook-574c9758c9-9dwr6 1/1 Running 0 12h
metallb-system pod/speaker-9gjng 1/1 Running 0 97m
metallb-system pod/controller-559b68bfd8-trk5z 1/1 Running 0 97m
ingress pod/nginx-ingress-microk8s-controller-f6cdb 1/1 Running 0 65m
default pod/nginx-deploy-5797b88878-vgp7x 1/1 Running 0 20m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 42h
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 41h
kube-system service/metrics-server ClusterIP 10.152.183.243 <none> 443/TCP 41h
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.225 <none> 443/TCP 41h
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.109 <none> 8000/TCP 41h
container-registry service/registry NodePort 10.152.183.44 <none> 5000:32000/TCP 38h
cert-manager service/cert-manager ClusterIP 10.152.183.183 <none> 9402/TCP 12h
cert-manager service/cert-manager-webhook ClusterIP 10.152.183.99 <none> 443/TCP 12h
echoserver service/echoserver ClusterIP 10.152.183.110 <none> 80/TCP 72m
ingress service/ingress LoadBalancer 10.152.183.4 192.168.0.11 80:32617/TCP,443:31867/TCP 64m
default service/nginx-deploy ClusterIP 10.152.183.149 <none> 80/TCP 19m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 42h
metallb-system daemonset.apps/speaker 1 1 1 1 1 beta.kubernetes.io/os=linux 97m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 65m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 41h
kube-system deployment.apps/coredns 1/1 1 1 41h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 42h
kube-system deployment.apps/metrics-server 1/1 1 1 41h
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 41h
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 41h
container-registry deployment.apps/registry 1/1 1 1 38h
cert-manager deployment.apps/cert-manager-cainjector 1/1 1 1 12h
cert-manager deployment.apps/cert-manager 1/1 1 1 12h
cert-manager deployment.apps/cert-manager-webhook 1/1 1 1 12h
metallb-system deployment.apps/controller 1/1 1 1 97m
default deployment.apps/nginx-deploy 1/1 1 1 20m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/hostpath-provisioner-5c65fbdb4f 1 1 1 41h
kube-system replicaset.apps/coredns-86f78bb79c 1 1 1 41h
kube-system replicaset.apps/calico-kube-controllers-847c8c99d 1 1 1 42h
kube-system replicaset.apps/metrics-server-8bbfb4bdb 1 1 1 41h
kube-system replicaset.apps/kubernetes-dashboard-7ffd448895 1 1 1 41h
kube-system replicaset.apps/dashboard-metrics-scraper-6c4568dc68 1 1 1 41h
container-registry replicaset.apps/registry-9b57d9df8 1 1 1 38h
cert-manager replicaset.apps/cert-manager-cainjector-5c6cb79446 1 1 1 12h
cert-manager replicaset.apps/cert-manager-794657589 1 1 1 12h
cert-manager replicaset.apps/cert-manager-webhook-574c9758c9 1 1 1 12h
metallb-system replicaset.apps/controller-559b68bfd8 1 1 1 97m
default replicaset.apps/nginx-deploy-5797b88878 1 1 1 20m
It looks like MetalLB works, as the ingress service received an IP from the pool I specified.
Now, when I try to deploy an Ingress to reach the nginx deployment, I don't get the ADDRESS:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-nginx-deploy
spec:
  rules:
    - host: test.com
      http:
        paths:
          - backend:
              serviceName: nginx-deploy
              servicePort: 80
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default ingress-nginx-deploy <none> test.com 80 13m
Any help would be really appreciated. Thank you!

TL;DR
There are a few ways to fix your Ingress so that it gets an IP address.
You can either:
Delete the kubernetes.io/ingress.class: nginx annotation and add ingressClassName: public under the spec section.
Use the newer apiVersion (networking.k8s.io/v1) example from the official documentation, which will have the default IngressClass assigned:
Kubernetes.io: Docs: Concepts: Services networking: Ingress
An example of an Ingress resource that will fix your issue:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-deploy
spec:
  ingressClassName: public
  # above field is optional as microk8s default ingressclass will be assigned
  rules:
    - host: test.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-deploy
                port:
                  number: 80
You can read more about IngressClass in the official documentation:
Kubernetes.io: Blog: Improvements to the Ingress API in Kubernetes 1.18
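If you want to verify which IngressClass names exist in your cluster before picking one (the MicroK8s ingress addon registers the public class used above), you can list and inspect them:
$ kubectl get ingressclass
$ kubectl describe ingressclass public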
I've included more explanation below that should shed some additional light on this particular setup.
After you apply the above Ingress resource, the output of:
$ kubectl get ingress
will be the following:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx-deploy public test.com 127.0.0.1 80 43s
As you can see, the ADDRESS contains 127.0.0.1. That's because this particular Ingress controller, enabled by the addon, binds to ports 80 and 443 on your host (the MicroK8s node).
You can see it by running:
$ sudo microk8s kubectl get daemonset -n ingress nginx-ingress-microk8s-controller -o yaml
A side note!
Look for hostPort and securityContext.capabilities.
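In that DaemonSet spec you should see something along these lines (an illustrative excerpt only, not the full manifest; exact values can differ between MicroK8s versions):
ports:
  - containerPort: 80
    hostPort: 80
    name: http
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    name: https
    protocol: TCP
securityContext:
  capabilities:
    add:
      - NET_BIND_SERVICE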
The Service of type LoadBalancer that you created will work with your Ingress controller, but its address will not be displayed under ADDRESS in $ kubectl get ingress.
A side note!
Please remember that in this particular setup you will need to connect to your Ingress controller with the header Host: test.com unless you have DNS resolution configured to support your setup; otherwise you will get a 404. For example:
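From a machine that can reach the LoadBalancer IP (192.168.0.11 in your output), you can either send the Host header explicitly or let curl fake the DNS resolution (illustrative commands; adjust the IP to your environment):
$ curl -H "Host: test.com" http://192.168.0.11/
$ curl --resolve test.com:80:192.168.0.11 http://test.com/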
Additional resource:
Github.com: Ubuntu: Microk8s: Ingress with MetalLb issues
Kubernetes.io: Docs: Concepts: Configuration: Overview

Related

Unable to reach pod service from Kubernetes master node; from worker nodes it is working

I have done a fresh Kubernetes installation in my VM setup. I have two CentOS 8 servers, which are master and slave; both are configured with a network bridge. The Kubernetes version is v1.21.9 and the Docker version is 23.0.0. I have deployed a simple hello world Node.js app as a pod. These are the currently running pods:
The issue is that I can access the pod's service through its node's IP address at http://192.168.1.27:31500/, but I'm unable to access the pod's service from the master node (expecting it to work at http://192.168.1.26:31500/). Can someone help me to resolve this?
There are no restarts in the k8s network components, and as far as I have checked there are no errors in the kube-proxy pods.
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default helloworldnodejsapp-deployment-86966cfcc5-85dgm 1/1 Running 0 17m 10.244.1.2 worker-server27 <none> <none>
kube-flannel kube-flannel-ds-226w7 1/1 Running 0 24m 192.168.1.27 worker-server27 <none> <none>
kube-flannel kube-flannel-ds-4cdhn 1/1 Running 0 63m 192.168.1.26 master-server26 <none> <none>
kube-system coredns-558bd4d5db-ht6sp 1/1 Running 0 63m 10.244.0.3 master-server26 <none> <none>
kube-system coredns-558bd4d5db-wq774 1/1 Running 0 63m 10.244.0.2 master-server26 <none> <none>
kube-system etcd-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-apiserver-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-controller-manager-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-proxy-ftsmp 1/1 Running 0 63m 192.168.1.26 master-server26 <none> <none>
kube-system kube-proxy-xhccg 1/1 Running 0 24m 192.168.1.27 worker-server27 <none> <none>
kube-system kube-scheduler-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
Node details
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-server26 Ready control-plane,master 70m v1.21.9 192.168.1.26 <none> CentOS Stream 8 4.18.0-448.el8.x86_64 docker://23.0.0
worker-server27 Ready <none> 30m v1.21.9 192.168.1.27 <none> CentOS Stream 8 4.18.0-448.el8.x86_64 docker://23.0.0
Configuration of /etc/docker/daemon.json:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "dns": ["8.8.8.8", "8.8.4.4", "192.168.1.1"]
}
Hello world pod deployment YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworldnodejsapp-deployment
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: helloworldnodejsapp
          image: "********:helloworldnodejs"
          ports:
            - containerPort: 8010
      imagePullSecrets:
        - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: helloworldnodejsapp-svc
  labels:
    app: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
    - port: 8010
      targetPort: 8010
      nodePort: 31500
From the explanation, I got the following details:
Node IP: 192.168.1.27
Master node IP: 192.168.1.26
Port: 31500
You want to access the app using your master node IP, which is 192.168.1.26. By default you can't access your application directly through the master node IP, because the pod is present on your worker node (192.168.1.27); even though you configured a NodePort, it will be bound to the worker node's IP. So you need to expose your application using the ClusterIP in order to reach it via the master node. Follow this documentation for more details.
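For example, from the master node you could check the Service and reach the application through its ClusterIP (an illustrative check; the actual ClusterIP value will differ in your cluster):
$ kubectl get svc helloworldnodejsapp-svc
$ curl http://<CLUSTER-IP>:8010/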

How to access kubernetes microk8s dashboard remotely without Ingress?

I am new to Kubernetes and I am trying to deploy a MicroK8s cluster on 4 Raspberry Pis.
I have been struggling with setting up the dashboard for (no joke) a total of about 30 hours now and I am starting to get extremely frustrated.
I just cannot access the dashboard remotely.
Solutions that didn't work out:
No. 1 Ingress:
I managed to enable ingress, but it seems to be extremely complicated to connect it to the dashboard since I manually have to resolve DNS properties inside pods and host machines.
I eventually gave up on that. There is also no documentation whatsoever available on how to set up an ingress without having a valid purchased domain pointing at your ingress node.
If you are able to guide me through this, I am up for it.
No. 2 Change the service type of the dashboard to LoadBalancer or NodePort:
With this method I can actually expose the dashboard... but it can only be accessed through HTTPS. Since the dashboard seems to use self-signed certificates or some other mechanism, I cannot access the dashboard via a browser. The browsers (Chrome, Firefox) always refuse to connect to the dashboard. When I try to access it via HTTP, the browsers say I need to use HTTPS.
No. 3 kube-proxy:
This only allows localhost connections. You can pass arguments to kube-proxy to allow other hosts to access the dashboard... but then again we have the HTTPS/HTTP problem.
At this point it is just amazing to me how extremely hard it is to just access this simple dashboard... Can anybody give any advice on how to access it?
a#k8s-node-1:~/kubernetes$ kctl describe service kubernetes-dashboard -n kube-system
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
Annotations: <none>
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.249
IPs: 10.152.183.249
Port: <unset> 443/TCP
TargetPort: 8443/TCP
NodePort: <unset> 32228/TCP
Endpoints: 10.1.140.67:8443
Session Affinity: None
External Traffic Policy: Cluster
$ kubectl edit svc -n kube-system kubernetes-dashboard
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"k8s-app":"kubernetes-dashboard"},"name":"kubernetes-dashboard","namespace":"kube-system"},"spec":{"ports":[{"port":443,"targetPort":8443}],"selector":{"k8s>
  creationTimestamp: "2022-03-21T14:30:10Z"
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "43060"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: fcb45ccc-070b-4a4d-b987-41f5b7777559
spec:
  clusterIP: 10.152.183.249
  clusterIPs:
    - 10.152.183.249
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - nodePort: 32228
      port: 443
      protocol: TCP
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
a#k8s-node-1:~/kubernetes$ kctl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.152.183.233 <none> 443/TCP 165m
kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 142m
dashboard-metrics-scraper ClusterIP 10.152.183.202 <none> 8000/TCP 32m
kubernetes-dashboard NodePort 10.152.183.249 <none> 443:32228/TCP 32m
a#k8s-node-1:~/kubernetes$ cat dashboard-ingress.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
    - host: nonexistent.net
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 8080
a#k8s-node-1:~/kubernetes$ kctl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-node-c4shb 1/1 Running 0 3h23m 192.168.180.47 k8s-node-2 <none> <none>
ingress nginx-ingress-microk8s-controller-nvcvx 1/1 Running 0 3h12m 10.1.140.66 k8s-node-2 <none> <none>
kube-system calico-node-ptwmk 1/1 Running 0 3h23m 192.168.180.48 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-hksg7 1/1 Running 0 3h12m 10.1.55.131 k8s-node-4 <none> <none>
ingress nginx-ingress-microk8s-controller-tk9dj 1/1 Running 0 3h12m 10.1.76.129 k8s-node-3 <none> <none>
ingress nginx-ingress-microk8s-controller-c8t54 1/1 Running 0 3h12m 10.1.109.66 k8s-node-1 <none> <none>
kube-system calico-node-k65fz 1/1 Running 0 3h22m 192.168.180.52 k8s-node-4 <none> <none>
kube-system coredns-64c6478b6c-584s8 1/1 Running 0 177m 10.1.109.67 k8s-node-1 <none> <none>
kube-system calico-kube-controllers-6966456d6b-vvnm6 1/1 Running 0 3h24m 10.1.109.65 k8s-node-1 <none> <none>
kube-system calico-node-7jhz9 1/1 Running 0 3h33m 192.168.180.46 k8s-node-1 <none> <none>
kube-system metrics-server-647bdc584d-ldf8q 1/1 Running 1 (3h19m ago) 3h20m 10.1.55.129 k8s-node-4 <none> <none>
kube-system kubernetes-dashboard-585bdb5648-8s9xt 1/1 Running 0 67m 10.1.140.67 k8s-node-2 <none> <none>
kube-system dashboard-metrics-scraper-69d9497b54-x7vt9 1/1 Running 0 67m 10.1.55.132 k8s-node-4 <none> <none>
Using an Ingress is indeed the preferred way, but since you seem to have trouble with it in your environment, you can use a LoadBalancer service instead.
To avoid the problem with the automatically generated certificates, provide your own certificate and private key to the dashboard, for example as a secret, and use the flags --tls-key-file and --tls-cert-file to point to them. More details: https://github.com/kubernetes/dashboard/blob/master/docs/user/certificate-management.md
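A rough sketch of that approach, following the linked guide (the secret and file names here are assumptions; the dashboard in this setup runs in the kube-system namespace):
# create a secret holding your certificate and key (file names are illustrative)
$ kubectl create secret generic kubernetes-dashboard-certs \
    --from-file=dashboard.crt --from-file=dashboard.key -n kube-system
# then edit the dashboard Deployment and point the flags at those files
$ kubectl edit deployment kubernetes-dashboard -n kube-system
#   container args to add:
#     - --tls-cert-file=dashboard.crt
#     - --tls-key-file=dashboard.key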

k3s on arch linux ARM worker service not responding

Current setup:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
cl01mtr01 Ready master 104m v1.18.2+k3s1 10.1.1.1 <none> Debian GNU/Linux 10 (buster) 4.19.0-9-amd64 containerd://1.3.3-k3s2
cl01wkr01 Ready <none> 9m20s v1.18.2+k3s1 10.1.1.101 <none> Arch Linux ARM 5.4.40-1-ARCH containerd://1.3.3-k3s2
Master installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - server \
--write-kubeconfig-mode 644 \
--cluster-cidr 172.20.0.0/16 \
--service-cidr 172.21.0.0/16 \
--cluster-dns 172.21.0.10 \
--disable traefik
Worker installed with:
export INSTALL_K3S_VERSION="v1.18.2+k3s1"
curl -sSLf https://get.k3s.io | sh -s - agent \
--server https://10.1.1.1:6443 \
--token <token from master>
I also tried with a Raspberry Pi as master, running Arch Linux and Raspbian, and a Rock Pi 64 with Armbian.
I tried with k3s versions:
v1.17.4+k3s1
v1.17.5+k3s1
v1.18.2+k3s1
I also tested with docker and the --docker install option in k3s.
The nodes get discovered (as shown above), but I cannot access the service running on my worker node(s) (Raspberry Pi 3 with Arch Linux ARM) via http://10.1.1.1:30001, although it can be accessed via kubectl exec.
I always get a connection timeout:
This site can’t be reached
10.1.1.1 took too long to respond.
When the pod runs on the master node, or if the worker is an amd64 node, it can be accessed via http://10.1.1.1:30001.
This is the resource I try to load and access:
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-default-configmap
  namespace: nginx
data:
  default.conf: |
    server {
        listen 80;
        listen [::]:80;
        #server_name localhost;
        location / {
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx
spec:
  ports:
    - name: http
      targetPort: 80
      port: 80
      nodePort: 30001
    - name: https
      targetPort: 443
      port: 443
      nodePort: 30002
  selector:
    app: nginx
  type: NodePort
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-daemonset
  namespace: nginx
  labels:
    app: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/master
                    operator: NotIn
                    values:
                      - "true"
      containers:
        - name: nginx
          image: nginx:stable
          imagePullPolicy: Always
          env:
            - name: TZ
              value: "Europe/Brussels"
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          volumeMounts:
            - name: default-conf
              mountPath: /etc/nginx/conf.d/default.conf
              subPath: default.conf
              readOnly: true
      restartPolicy: Always
      volumes:
        - name: default-conf
          configMap:
            name: nginx-default-configmap
Some extra info:
> kubectl get all -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/local-path-provisioner-6d59f47c7-d477m 1/1 Running 0 116m 172.20.0.4 cl01mtr01 <none> <none>
kube-system pod/metrics-server-7566d596c8-fbb7b 1/1 Running 0 116m 172.20.0.2 cl01mtr01 <none> <none>
kube-system pod/coredns-8655855d6-gnbsm 1/1 Running 0 116m 172.20.0.3 cl01mtr01 <none> <none>
nginx pod/nginx-daemonset-l4j7s 1/1 Running 0 52s 172.20.1.3 cl01wkr01 <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 172.21.0.1 <none> 443/TCP 116m <none>
kube-system service/kube-dns ClusterIP 172.21.0.10 <none> 53/UDP,53/TCP,9153/TCP 116m k8s-app=kube-dns
kube-system service/metrics-server ClusterIP 172.21.152.234 <none> 443/TCP 116m k8s-app=metrics-server
nginx service/nginx-service NodePort 172.21.14.185 <none> 80:30001/TCP,443:30002/TCP 52s app=nginx
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
nginx daemonset.apps/nginx-daemonset 1 1 1 1 1 <none> 52s nginx nginx:stable app=nginx
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.apps/local-path-provisioner 1/1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner
kube-system deployment.apps/metrics-server 1/1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server
kube-system deployment.apps/coredns 1/1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
kube-system replicaset.apps/local-path-provisioner-6d59f47c7 1 1 1 116m local-path-provisioner rancher/local-path-provisioner:v0.0.11 app=local-path-provisioner,pod-template-hash=6d59f47c7
kube-system replicaset.apps/metrics-server-7566d596c8 1 1 1 116m metrics-server rancher/metrics-server:v0.3.6 k8s-app=metrics-server,pod-template-hash=7566d596c8
kube-system replicaset.apps/coredns-8655855d6 1 1 1 116m coredns rancher/coredns-coredns:1.6.3 k8s-app=kube-dns,pod-template-hash=8655855d6

How to expose an app in Kubernetes with Consul

We have installed Consul through Helm charts on a k8s cluster. Here, I have deployed one Consul server and the rest are Consul agents.
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
We see that the nodes are registered onto the Consul Server. http://XX.XX.XX.XX/ui/kube/nodes
We have deployed a hello world application onto the k8s cluster. This will bring up Hello-World:
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
sampleapp-69bf9f84-ms55k 2/2 Running 0 4h
Below is the yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
        - name: sampleapp
          image: "docker-dev-repo.aws.com/sampleapp-java/helloworld-service:a8c9f65-65"
          ports:
            - containerPort: 8080
              name: http
After a successful deployment of sampleapp, I see that sampleapp-proxy is registered in Consul and sampleapp-proxy is listed in the Kubernetes services. (This is because toConsul and toK8S are passed as true during installation.)
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.test <none> 4h
consul-connect-injector-svc ClusterIP XX.XX.XX.XX <none> 443/TCP 4h
consul-dns ClusterIP XX.XX.XX.XX <none> 53/TCP,53/UDP 4h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 4h
consul-ui LoadBalancer XX.XX.XX.XX XX.XX.XX.XX 80:32648/TCP 4h
dns-test-proxy ExternalName <none> dns-test-proxy.service.test <none> 2h
fluentd-gcp-proxy ExternalName <none> fluentd-gcp-proxy.service.test <none> 33m
kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5d
sampleapp-proxy ExternalName <none> sampleapp-proxy.service.test <none> 4h
How can I access my sampleapp? Should I expose my application as a Kubernetes service again?
Earlier, without Consul, we used to create a service for the sampleapp and expose it through an Ingress. Using the Ingress LoadBalancer, we used to access our application.
Consul does not provide any new way to expose your apps. You need to create the Ingress/LoadBalancer as before.
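For example, something along these lines (a minimal sketch; the Service name, host, and Ingress apiVersion are illustrative and depend on your ingress controller and Kubernetes version):
apiVersion: v1
kind: Service
metadata:
  name: sampleapp-svc          # illustrative name
spec:
  selector:
    app: sampleapp
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1   # older clusters may need networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sampleapp-ingress
spec:
  rules:
    - host: sampleapp.example.com   # illustrative host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: sampleapp-svc
                port:
                  number: 80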

Accessing service using istio ingress gives 503 error when mTLS is enabled

I have a mutual TLS enabled Istio mesh. My setup is as follows:
A service running inside a pod (service container + Envoy sidecar).
An Envoy gateway which sits in front of the above service, with an Istio Gateway and VirtualService attached to it; it routes the /info/ path to the above service.
Another Istio Gateway configured for ingress using the default Istio ingress pod. This also has a Gateway + VirtualService combination; the VirtualService directs the /info/ path to the service described in 2.
I'm attempting to access the service from the ingress gateway using a curl command such as:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
But I'm getting a 503 Service Unavailable error as below:
$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.105.138.94...
* Connected to istio-ingressgateway.istio-system (10.105.138.94) port 80 (#0)
> GET /info/ HTTP/1.1
> Host: istio-ingressgateway.istio-system
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization: Bearer ...
>
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Sat, 12 Jan 2019 13:30:13 GMT
< server: envoy
<
* Connection #0 to host istio-ingressgateway.istio-system left intact
I checked the logs of the istio-ingressgateway pod and the following line was logged there:
[2019-01-13T05:40:16.517Z] "GET /info/ HTTP/1.1" 503 UH 0 19 6 - "10.244.0.5" "curl/7.47.0" "da02fdce-8bb5-90fe-b422-5c74fe28759b" "istio-ingressgateway.istio-system" "-"
If I log into the istio ingress pod and send the request with curl, I get a successful 200 OK:
# curl hr--gateway-service.default/info/ -H "Authorization: Bearer $token" -v
Also, I managed to get a successful response for the same curl command when the mesh was created with mTLS disabled. There are no conflicts shown in the mTLS setup.
Here are the config details for my service mesh in case you need additional info.
Pods
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hr--gateway-deployment-688986c87c-z9nkh 1/1 Running 0 37m
default hr--hr-deployment-596946948d-c89bn 2/2 Running 0 37m
default hr--sts-deployment-694d7cff97-gjwdk 1/1 Running 0 37m
ingress-nginx default-http-backend-6586bc58b6-8qss6 1/1 Running 0 42m
ingress-nginx nginx-ingress-controller-6bd7c597cb-t4rwq 1/1 Running 0 42m
istio-system grafana-85dbf49c94-lfpbr 1/1 Running 0 42m
istio-system istio-citadel-545f49c58b-dq5lq 1/1 Running 0 42m
istio-system istio-cleanup-secrets-bh5ws 0/1 Completed 0 42m
istio-system istio-egressgateway-7d59954f4-qcnxm 1/1 Running 0 42m
istio-system istio-galley-5b6449c48f-72vkb 1/1 Running 0 42m
istio-system istio-grafana-post-install-lwmsf 0/1 Completed 0 42m
istio-system istio-ingressgateway-8455c8c6f7-5khtk 1/1 Running 0 42m
istio-system istio-pilot-58ff4d6647-bct4b 2/2 Running 0 42m
istio-system istio-policy-59685fd869-h7v94 2/2 Running 0 42m
istio-system istio-security-post-install-cqj6k 0/1 Completed 0 42m
istio-system istio-sidecar-injector-75b9866679-qg88s 1/1 Running 0 42m
istio-system istio-statsd-prom-bridge-549d687fd9-bspj2 1/1 Running 0 42m
istio-system istio-telemetry-6ccf9ddb96-hxnwv 2/2 Running 0 42m
istio-system istio-tracing-7596597bd7-m5pk8 1/1 Running 0 42m
istio-system prometheus-6ffc56584f-4cm5v 1/1 Running 0 42m
istio-system servicegraph-5d64b457b4-jttl9 1/1 Running 0 42m
kube-system coredns-78fcdf6894-rxw57 1/1 Running 0 50m
kube-system coredns-78fcdf6894-s4bg2 1/1 Running 0 50m
kube-system etcd-ubuntu 1/1 Running 0 49m
kube-system kube-apiserver-ubuntu 1/1 Running 0 49m
kube-system kube-controller-manager-ubuntu 1/1 Running 0 49m
kube-system kube-flannel-ds-9nvf9 1/1 Running 0 49m
kube-system kube-proxy-r868m 1/1 Running 0 50m
kube-system kube-scheduler-ubuntu 1/1 Running 0 49m
Services
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hr--gateway-service ClusterIP 10.100.238.144 <none> 80/TCP,443/TCP 39m
default hr--hr-service ClusterIP 10.96.193.43 <none> 80/TCP 39m
default hr--sts-service ClusterIP 10.99.54.137 <none> 8080/TCP,8081/TCP,8090/TCP 39m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 52m
ingress-nginx default-http-backend ClusterIP 10.109.166.229 <none> 80/TCP 44m
ingress-nginx ingress-nginx NodePort 10.108.9.180 192.168.60.3 80:31001/TCP,443:32315/TCP 44m
istio-system grafana ClusterIP 10.102.141.231 <none> 3000/TCP 44m
istio-system istio-citadel ClusterIP 10.101.128.187 <none> 8060/TCP,9093/TCP 44m
istio-system istio-egressgateway ClusterIP 10.102.157.204 <none> 80/TCP,443/TCP 44m
istio-system istio-galley ClusterIP 10.96.31.251 <none> 443/TCP,9093/TCP 44m
istio-system istio-ingressgateway LoadBalancer 10.105.138.94 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31219/TCP,8060:31482/TCP,853:30034/TCP,15030:31544/TCP,15031:32652/TCP 44m
istio-system istio-pilot ClusterIP 10.100.170.73 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 44m
istio-system istio-policy ClusterIP 10.104.77.184 <none> 9091/TCP,15004/TCP,9093/TCP 44m
istio-system istio-sidecar-injector ClusterIP 10.100.180.152 <none> 443/TCP 44m
istio-system istio-statsd-prom-bridge ClusterIP 10.107.39.50 <none> 9102/TCP,9125/UDP 44m
istio-system istio-telemetry ClusterIP 10.110.55.232 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 44m
istio-system jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 44m
istio-system jaeger-collector ClusterIP 10.102.43.21 <none> 14267/TCP,14268/TCP 44m
istio-system jaeger-query ClusterIP 10.104.182.189 <none> 16686/TCP 44m
istio-system prometheus ClusterIP 10.100.0.70 <none> 9090/TCP 44m
istio-system servicegraph ClusterIP 10.97.65.37 <none> 8088/TCP 44m
istio-system tracing ClusterIP 10.109.87.118 <none> 80/TCP 44m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 52m
Gateway and virtual service described in point 2
$ kubectl describe gateways.networking.istio.io hr--gateway
Name:         hr--gateway
Namespace:    default
API Version:  networking.istio.io/v1alpha3
Kind:         Gateway
Metadata:
  ...
Spec:
  Selector:
    App:  hr--gateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http2
      Number:    80
      Protocol:  HTTP2
    Hosts:
      *
    Port:
      Name:      https
      Number:    443
      Protocol:  HTTPS
    Tls:
      Mode:  PASSTHROUGH
$ kubectl describe virtualservices.networking.istio.io hr--gateway
Name:         hr--gateway
Namespace:    default
Labels:       app=hr--gateway
Annotations:  <none>
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Metadata:
  ...
Spec:
  Gateways:
    hr--gateway
  Hosts:
    *
  Http:
    Match:
      Uri:
        Prefix:  /info/
    Rewrite:
      Uri:  /
    Route:
      Destination:
        Host:  hr--hr-service
Gateway and virtual service described in point 3
$ kubectl describe gateways.networking.istio.io ingress-gateway
Name:         ingress-gateway
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"ingress-gateway","namespace":"default"},"spec":{"sel...
API Version:  networking.istio.io/v1alpha3
Kind:         Gateway
Metadata:
  ...
Spec:
  Selector:
    Istio:  ingressgateway
  Servers:
    Hosts:
      *
    Port:
      Name:      http2
      Number:    80
      Protocol:  HTTP2
$ kubectl describe virtualservices.networking.istio.io hr--gateway-ingress-vs
Name:         hr--gateway-ingress-vs
Namespace:    default
Labels:       app=hr--gateway
API Version:  networking.istio.io/v1alpha3
Kind:         VirtualService
Metadata:
Spec:
  Gateways:
    ingress-gateway
  Hosts:
    *
  Http:
    Match:
      Uri:
        Prefix:  /info/
    Route:
      Destination:
        Host:  hr--gateway-service
Events:  <none>
The problem is probably as follows: istio-ingressgateway initiates mTLS to hr--gateway-service on port 80, but hr--gateway-service expects plain HTTP connections.
There are multiple solutions:
Define a DestinationRule to instruct clients to disable mTLS on calls to hr--gateway-service:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: hr--gateway-service-disable-mtls
spec:
  host: hr--gateway-service.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: DISABLE
Instruct hr--gateway-service to accept mTLS connections. For that, configure the server TLS options on port 80 to be MUTUAL and to use the Istio certificates and private key: specify serverCertificate, caCertificates and privateKey to be /etc/certs/cert-chain.pem, /etc/certs/root-cert.pem and /etc/certs/key.pem, respectively.
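For illustration, the server block of the hr--gateway Gateway could then look roughly like this (a sketch only, using the certificate paths above; note the protocol is switched to HTTPS so that the TLS settings apply to this server):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr--gateway
spec:
  selector:
    app: hr--gateway
  servers:
    - hosts:
        - "*"
      port:
        name: http2
        number: 80
        protocol: HTTPS   # TLS settings require a TLS-capable protocol on this server
      tls:
        mode: MUTUAL
        serverCertificate: /etc/certs/cert-chain.pem
        caCertificates: /etc/certs/root-cert.pem
        privateKey: /etc/certs/key.pem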