Pods on different nodes not working in Kubernetes - kubernetes

I hope someone can help me with this issue.
I'm testing a containerized microservice on a Kubernetes cluster made up of two nodes:
Merry -> master (and worker)
Pippin -> worker
This is my deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: resize
spec:
  selector:
    matchLabels:
      run: resize
  replicas: 1
  template:
    metadata:
      labels:
        run: resize
    spec:
      containers:
      - name: resize
        image: mdndocker/simpleweb
        ports:
        - containerPort: 1337
        resources:
          limits:
            cpu: 200m
          requests:
            cpu: 100m
This is the service:
apiVersion: v1
kind: Service
metadata:
  name: resize
  labels:
    run: resize
spec:
  type: ClusterIP
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 1337
  selector:
    run: resize
I'm using the Calico network plugin.
I scaled the replicas to 0 and then to 8 so that I would have multiple instances of my app on both nodes.
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
locust-77c699c94d-k8ssz 1/1 Running 0 17m 192.168.61.160 pippin <none> <none>
resize-d8cd49f6c-2tk62 1/1 Running 0 64m 192.168.61.158 pippin <none> <none>
resize-d8cd49f6c-6g2f9 1/1 Running 0 64m 192.168.61.155 pippin <none> <none>
resize-d8cd49f6c-7795n 1/1 Running 0 64m 172.17.0.8 merry <none> <none>
resize-d8cd49f6c-jvw49 1/1 Running 0 64m 192.168.61.156 pippin <none> <none>
resize-d8cd49f6c-mml47 1/1 Running 0 64m 192.168.61.157 pippin <none> <none>
resize-d8cd49f6c-qpkpk 1/1 Running 0 64m 172.17.0.6 merry <none> <none>
resize-d8cd49f6c-t4t8z 1/1 Running 0 64m 172.17.0.5 merry <none> <none>
resize-d8cd49f6c-vmpkp 1/1 Running 0 64m 172.17.0.7 merry <none> <none>
Some pods are running on Pippin and others on Merry. Unfortunately, the four pods scheduled on Merry don't receive any load when traffic is generated:
NAME CPU(cores) MEMORY(bytes)
locust-77c699c94d-k8ssz 873m 82Mi
resize-d8cd49f6c-2tk62 71m 104Mi
resize-d8cd49f6c-6g2f9 67m 107Mi
resize-d8cd49f6c-7795n 0m 31Mi
resize-d8cd49f6c-jvw49 78m 104Mi
resize-d8cd49f6c-mml47 73m 105Mi
resize-d8cd49f6c-qpkpk 0m 32Mi
resize-d8cd49f6c-t4t8z 0m 31Mi
resize-d8cd49f6c-vmpkp 0m 31Mi
Do you know why this is happening, and what I can check to solve this issue?
Do you know why the pod IP addresses differ between nodes even though I used --pod-network-cidr=192.168.0.0/24?
Thanks to anyone who can help!

The pods deployed on the master node "merry" are in Running status, so there is no issue there. As for your other question about why the master node shows different CIDR values: if you have jq installed, run "kubectl get node merry -o json | jq '.spec.podCIDR'", which will print the pod CIDR assigned to that node, or you can simply describe the master node.
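For example, a quick way to compare the pod CIDR assigned to each node (a minimal sketch using the node names from the question):

kubectl get node merry -o jsonpath='{.spec.podCIDR}{"\n"}'
kubectl get node pippin -o jsonpath='{.spec.podCIDR}{"\n"}'
# or, without jq:
kubectl describe node merry | grep -i PodCIDR

Note that the pods on merry show 172.17.0.x addresses, which is the default Docker bridge range rather than an address from the Calico pool, so it may also be worth checking that the calico-node pod on merry is healthy (kubectl -n kube-system get pods -o wide | grep calico).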

Related

Unable to reach pod service from Kubernetes master node; from worker nodes it is working

I have done a fresh Kubernetes installation in my VM setup. I have two CentOS 8 servers, a master and a worker, both configured with bridged networking. The Kubernetes version is v1.21.9 and the Docker version is 23.0.0. I have deployed a simple hello-world Node.js app as a pod. These are the currently running pods:
The issue is that I can access the pod's service through the worker node's IP address at http://192.168.1.27:31500/, but I'm unable to access it from the master node (I expected it to work at http://192.168.1.26:31500/). Can someone help me resolve this?
There are no restarts in the Kubernetes network components, and as far as I have checked there are no errors in the kube-proxy pods.
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default helloworldnodejsapp-deployment-86966cfcc5-85dgm 1/1 Running 0 17m 10.244.1.2 worker-server27 <none> <none>
kube-flannel kube-flannel-ds-226w7 1/1 Running 0 24m 192.168.1.27 worker-server27 <none> <none>
kube-flannel kube-flannel-ds-4cdhn 1/1 Running 0 63m 192.168.1.26 master-server26 <none> <none>
kube-system coredns-558bd4d5db-ht6sp 1/1 Running 0 63m 10.244.0.3 master-server26 <none> <none>
kube-system coredns-558bd4d5db-wq774 1/1 Running 0 63m 10.244.0.2 master-server26 <none> <none>
kube-system etcd-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-apiserver-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-controller-manager-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
kube-system kube-proxy-ftsmp 1/1 Running 0 63m 192.168.1.26 master-server26 <none> <none>
kube-system kube-proxy-xhccg 1/1 Running 0 24m 192.168.1.27 worker-server27 <none> <none>
kube-system kube-scheduler-master-server26 1/1 Running 0 64m 192.168.1.26 master-server26 <none> <none>
Node details
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master-server26 Ready control-plane,master 70m v1.21.9 192.168.1.26 <none> CentOS Stream 8 4.18.0-448.el8.x86_64 docker://23.0.0
worker-server27 Ready <none> 30m v1.21.9 192.168.1.27 <none> CentOS Stream 8 4.18.0-448.el8.x86_64 docker://23.0.0
configuration of /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "dns": ["8.8.8.8", "8.8.4.4", "192.168.1.1"]
}
Hello world pod deployment yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: helloworldnodejsapp-deployment
  labels:
    app: web
spec:
  selector:
    matchLabels:
      app: web
  replicas: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: helloworldnodejsapp
        image: "********:helloworldnodejs"
        ports:
        - containerPort: 8010
      imagePullSecrets:
      - name: regcred
---
apiVersion: v1
kind: Service
metadata:
  name: helloworldnodejsapp-svc
  labels:
    app: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 8010
    targetPort: 8010
    nodePort: 31500
From the explanation, I got the following details:
Node IP: 192.168.1.27
Master node IP: 192.168.1.26
Port: 31500
You want to access the app using your master node IP, which is 192.168.1.26. By default you can't access your application directly through the master node's IP because the pod is running on your worker node (192.168.1.27); even though you configured a NodePort, it will be bound to the worker node's IP. So you need to expose your application using the ClusterIP in order to reach it from the master node. Follow this documentation for more details.
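A quick way to verify this from the master node is to curl the Service's ClusterIP directly, since a ClusterIP is reachable from any node in the cluster (a minimal sketch; 10.96.0.123 is a placeholder for whatever CLUSTER-IP kubectl reports):

kubectl get svc helloworldnodejsapp-svc
# note the CLUSTER-IP column, then from the master node:
curl http://10.96.0.123:8010/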

Why do pods created by a Deployment keep running on a NotReady node?

I have three nodes. When I shut down cdh-k8s-3.novalocal, the pods on it remain in Running status the whole time:
# kubectl get node
NAME STATUS ROLES AGE VERSION
cdh-k8s-1.novalocal Ready control-plane,master 15d v1.20.0
cdh-k8s-2.novalocal Ready <none> 9d v1.20.0
cdh-k8s-3.novalocal NotReady <none> 9d v1.20.0
# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-66b6c48dd5-5jtqv 1/1 Running 0 3h28m 10.244.26.110 cdh-k8s-3.novalocal <none> <none>
nginx-deployment-66b6c48dd5-fntn4 1/1 Running 0 3h28m 10.244.26.108 cdh-k8s-3.novalocal <none> <none>
nginx-deployment-66b6c48dd5-vz7hr 1/1 Running 0 3h28m 10.244.26.109 cdh-k8s-3.novalocal <none> <none>
My YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 3 0 3h28m
I found this in the docs:
DaemonSet pods are created with NoExecute tolerations for the following taints with no tolerationSeconds:
node.kubernetes.io/unreachable
node.kubernetes.io/not-ready
This ensures that DaemonSet pods are never evicted due to these problems.
But that applies to DaemonSets, not Deployments.
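For comparison, Deployment pods normally get these same tolerations added automatically, but with tolerationSeconds: 300, so they should be evicted roughly five minutes after the node becomes NotReady or unreachable. You can set them explicitly in the pod template to control that delay; a sketch with illustrative values, to be merged under spec.template.spec of the Deployment above:

      tolerations:
      - key: node.kubernetes.io/not-ready
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 60   # illustrative; the default added by the admission controller is 300
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 60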

Microk8s + metallb + ingress

I'm quite new to Kubernetes and I'm trying to set up a MicroK8s test environment on a VPS with CentOS.
What I did:
I set up the cluster and enabled the ingress and MetalLB add-ons:
microk8s enable ingress
microk8s enable metallb
Exposed the ingress-controller service:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  type: LoadBalancer
  selector:
    name: nginx-ingress-microk8s
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
Exposed an nginx deployment to test the ingress
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx-deploy
  template:
    metadata:
      labels:
        run: nginx-deploy
    spec:
      containers:
      - image: nginx
        name: nginx
This is the status of my cluster:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/hostpath-provisioner-5c65fbdb4f-m2xq6 1/1 Running 3 41h
kube-system pod/coredns-86f78bb79c-7p8bs 1/1 Running 3 41h
kube-system pod/calico-node-g4ws4 1/1 Running 6 42h
kube-system pod/calico-kube-controllers-847c8c99d-xhmd7 1/1 Running 4 42h
kube-system pod/metrics-server-8bbfb4bdb-ggvk7 1/1 Running 0 41h
kube-system pod/kubernetes-dashboard-7ffd448895-ktv8j 1/1 Running 0 41h
kube-system pod/dashboard-metrics-scraper-6c4568dc68-l4xmg 1/1 Running 0 41h
container-registry pod/registry-9b57d9df8-xjh8d 1/1 Running 0 38h
cert-manager pod/cert-manager-cainjector-5c6cb79446-vv5j2 1/1 Running 0 12h
cert-manager pod/cert-manager-794657589-srrmr 1/1 Running 0 12h
cert-manager pod/cert-manager-webhook-574c9758c9-9dwr6 1/1 Running 0 12h
metallb-system pod/speaker-9gjng 1/1 Running 0 97m
metallb-system pod/controller-559b68bfd8-trk5z 1/1 Running 0 97m
ingress pod/nginx-ingress-microk8s-controller-f6cdb 1/1 Running 0 65m
default pod/nginx-deploy-5797b88878-vgp7x 1/1 Running 0 20m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 42h
kube-system service/kube-dns ClusterIP 10.152.183.10 <none> 53/UDP,53/TCP,9153/TCP 41h
kube-system service/metrics-server ClusterIP 10.152.183.243 <none> 443/TCP 41h
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.225 <none> 443/TCP 41h
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.109 <none> 8000/TCP 41h
container-registry service/registry NodePort 10.152.183.44 <none> 5000:32000/TCP 38h
cert-manager service/cert-manager ClusterIP 10.152.183.183 <none> 9402/TCP 12h
cert-manager service/cert-manager-webhook ClusterIP 10.152.183.99 <none> 443/TCP 12h
echoserver service/echoserver ClusterIP 10.152.183.110 <none> 80/TCP 72m
ingress service/ingress LoadBalancer 10.152.183.4 192.168.0.11 80:32617/TCP,443:31867/TCP 64m
default service/nginx-deploy ClusterIP 10.152.183.149 <none> 80/TCP 19m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 42h
metallb-system daemonset.apps/speaker 1 1 1 1 1 beta.kubernetes.io/os=linux 97m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 65m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/hostpath-provisioner 1/1 1 1 41h
kube-system deployment.apps/coredns 1/1 1 1 41h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 42h
kube-system deployment.apps/metrics-server 1/1 1 1 41h
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 41h
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 41h
container-registry deployment.apps/registry 1/1 1 1 38h
cert-manager deployment.apps/cert-manager-cainjector 1/1 1 1 12h
cert-manager deployment.apps/cert-manager 1/1 1 1 12h
cert-manager deployment.apps/cert-manager-webhook 1/1 1 1 12h
metallb-system deployment.apps/controller 1/1 1 1 97m
default deployment.apps/nginx-deploy 1/1 1 1 20m
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/hostpath-provisioner-5c65fbdb4f 1 1 1 41h
kube-system replicaset.apps/coredns-86f78bb79c 1 1 1 41h
kube-system replicaset.apps/calico-kube-controllers-847c8c99d 1 1 1 42h
kube-system replicaset.apps/metrics-server-8bbfb4bdb 1 1 1 41h
kube-system replicaset.apps/kubernetes-dashboard-7ffd448895 1 1 1 41h
kube-system replicaset.apps/dashboard-metrics-scraper-6c4568dc68 1 1 1 41h
container-registry replicaset.apps/registry-9b57d9df8 1 1 1 38h
cert-manager replicaset.apps/cert-manager-cainjector-5c6cb79446 1 1 1 12h
cert-manager replicaset.apps/cert-manager-794657589 1 1 1 12h
cert-manager replicaset.apps/cert-manager-webhook-574c9758c9 1 1 1 12h
metallb-system replicaset.apps/controller-559b68bfd8 1 1 1 97m
default replicaset.apps/nginx-deploy-5797b88878 1 1 1 20m
It looks like MetalLB works, as the ingress service received an IP from the pool I specified.
Now, when I try to deploy an Ingress to reach the nginx deployment, I don't get an ADDRESS:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: ingress-nginx-deploy
spec:
  rules:
  - host: test.com
    http:
      paths:
      - backend:
          serviceName: nginx-deploy
          servicePort: 80
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default ingress-nginx-deploy <none> test.com 80 13m
Any help would be really appreciated. Thank you!
TL;DR
There are a couple of ways to fix your Ingress so that it gets an IP address.
You can either:
Delete the kubernetes.io/ingress.class: nginx annotation and add ingressClassName: public under the spec section, or
Use the newer example (apiVersion) from the official documentation, which will have an IngressClass assigned by default:
Kubernetes.io: Docs: Concepts: Services networking: Ingress
Example of Ingress resource that will fix your issue:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-deploy
spec:
  ingressClassName: public
  # the field above is optional, as the MicroK8s default IngressClass will be assigned
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy
            port:
              number: 80
You can read more about IngressClass by following official documentation:
Kubernetes.io: Blog: Improvements to the Ingress API in Kubernetes 1.18
I've included more explanation that should shed some additional light on this particular setup.
After you apply the above Ingress resource, the output of:
$ kubectl get ingress
will be the following:
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-nginx-deploy public test.com 127.0.0.1 80 43s
As you can see, the ADDRESS contains 127.0.0.1. That's because this particular Ingress controller, enabled by an addon, binds to ports 80 and 443 on your host (the MicroK8s node).
You can see it by running:
$ sudo microk8s kubectl get daemonset -n ingress nginx-ingress-microk8s-controller -o yaml
A side note!
Look for hostPort and securityContext.capabilities.
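In that DaemonSet manifest you should find something along these lines (an illustrative excerpt, not the complete manifest):

ports:
- containerPort: 80
  hostPort: 80
  protocol: TCP
- containerPort: 443
  hostPort: 443
  protocol: TCP
securityContext:
  capabilities:
    add:
    - NET_BIND_SERVICE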
The Service of type LoadBalancer that you created will work with your Ingress controller, but its IP will not be displayed under ADDRESS in $ kubectl get ingress.
A side note!
Please remember that in this particular setup you will need to connect to your Ingress controller with a Host: test.com header unless you have DNS resolution configured to support your setup. Otherwise you will get a 404.
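For example (192.168.0.11 is the LoadBalancer IP from your kubectl get services output):

$ curl -H "Host: test.com" http://192.168.0.11/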
Additional resource:
Github.com: Ubuntu: Microk8s: Ingress with MetalLb issues
Kubernetes.io: Docs: Concepts: Configuration: Overview

Kubernetes CoreDNS pods not running on every node

With a new installation of Kubernetes on Ubuntu with one master and two worker nodes:
root@master1# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master1 Ready master 10h v1.19.3 10.10.10.216 <none> Ubuntu 18.04.5 LTS 4.15.0-122-generic docker://19.3.13
worker1 Ready <none> 10h v1.19.3 10.10.10.211 <none> Ubuntu 18.04.5 LTS 4.15.0-122-generic docker://19.3.13
worker2 Ready <none> 10h v1.19.3 10.10.10.212 <none> Ubuntu 18.04.5 LTS 4.15.0-122-generic docker://19.3.13
I checked whether all pods in the kube-system namespace are working:
root@master1:# kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-f9fd979d6-cggnh 1/1 Running 0 10h 10.244.0.2 master1 <none> <none>
kube-system coredns-f9fd979d6-tnm7c 1/1 Running 0 10h 10.244.0.3 master1 <none> <none>
kube-system etcd-master1 1/1 Running 0 10h 10.10.10.216 master1 <none> <none>
kube-system kube-apiserver-master1 1/1 Running 0 10h 10.10.10.216 master1 <none> <none>
kube-system kube-controller-manager-master1 1/1 Running 0 10h 10.10.10.216 master1 <none> <none>
kube-system kube-flannel-ds-9ph5c 1/1 Running 0 10h 10.10.10.216 master1 <none> <none>
kube-system kube-flannel-ds-fjkng 1/1 Running 0 10h 10.10.10.212 worker2 <none> <none>
kube-system kube-flannel-ds-rfkqd 1/1 Running 0 9h 10.10.10.211 worker1 <none> <none>
kube-system kube-proxy-j7s2m 1/1 Running 0 10h 10.10.10.216 master1 <none> <none>
kube-system kube-proxy-n7279 1/1 Running 0 10h 10.10.10.212 worker2 <none> <none>
kube-system kube-proxy-vkb66 1/1 Running 0 9h 10.10.10.211 worker1 <none> <none>
kube-system kube-scheduler-master1 1/1 Running 0 10h 10.10.10.216 master1 <none> <none>
I see that CoreDNS runs only on the master, with two pods.
What should I do to replicate CoreDNS across all 3 VMs (master + 2 worker nodes)?
This is the description of the coredns deployment:
root@master1:# kubectl describe deployment coredns -n kube-system
Name: coredns
Namespace: kube-system
CreationTimestamp: Wed, 04 Nov 2020 20:32:10 +0000
Labels: k8s-app=kube-dns
Annotations: deployment.kubernetes.io/revision: 1
Selector: k8s-app=kube-dns
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 25% max surge
Pod Template:
Labels: k8s-app=kube-dns
Service Account: coredns
Containers:
coredns:
Image: k8s.gcr.io/coredns:1.7.0
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
Priority Class Name: system-cluster-critical
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: coredns-f9fd979d6 (2/2 replicas created)
Events: <none>
Also, the logs and the status of the deployment:
root@master1# kubectl logs deployment/coredns -n kube-system
Found 2 pods, using pod/coredns-f9fd979d6-cggnh
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
root@master1:# kubectl get deployment -o wide -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
coredns 2/2 2 2 10h coredns k8s.gcr.io/coredns:1.7.0 k8s-app=kube-dns
Andre, you can add podAntiAffinity to your coredns definition:
podAntiAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
  - labelSelector:
      matchLabels:
        k8s-app: kube-dns
    topologyKey: kubernetes.io/hostname
This will make your CoreDNS replicas schedule onto different nodes.
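For example (a sketch: the podAntiAffinity block goes under spec.template.spec.affinity of the coredns Deployment, and scaling to 3 replicas gives you one pod per node):

kubectl -n kube-system edit deployment coredns
# add the podAntiAffinity block shown above under spec.template.spec.affinity
kubectl -n kube-system scale deployment coredns --replicas=3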

How to expose an app in Kubernetes with Consul

We have installed Consul through Helm charts on a k8s cluster. Here, I have deployed one Consul server and the rest are Consul agents.
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
We can see that the nodes are registered with the Consul server at http://XX.XX.XX.XX/ui/kube/nodes
We have deployed a hello-world application onto the k8s cluster. This brings up Hello-World:
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
sampleapp-69bf9f84-ms55k 2/2 Running 0 4h
Below is the yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sampleapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sampleapp
  template:
    metadata:
      labels:
        app: sampleapp
      annotations:
        "consul.hashicorp.com/connect-inject": "true"
    spec:
      containers:
      - name: sampleapp
        image: "docker-dev-repo.aws.com/sampleapp-java/helloworld-service:a8c9f65-65"
        ports:
        - containerPort: 8080
          name: http
After successfully deploying sampleapp, I see that sampleapp-proxy is registered in Consul and that sampleapp-proxy is listed among the Kubernetes services. (This is because toConsul and toK8S are set to true during installation.)
kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.test <none> 4h
consul-connect-injector-svc ClusterIP XX.XX.XX.XX <none> 443/TCP 4h
consul-dns ClusterIP XX.XX.XX.XX <none> 53/TCP,53/UDP 4h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 4h
consul-ui LoadBalancer XX.XX.XX.XX XX.XX.XX.XX 80:32648/TCP 4h
dns-test-proxy ExternalName <none> dns-test-proxy.service.test <none> 2h
fluentd-gcp-proxy ExternalName <none> fluentd-gcp-proxy.service.test <none> 33m
kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5d
sampleapp-proxy ExternalName <none> sampleapp-proxy.service.test <none> 4h
How can I access my sampleapp? Should I expose my application as a Kubernetes service again?
Earlier, without Consul, we used to create a service for the sampleapp and expose it through an Ingress. Using the Ingress load balancer, we would access our application.
Consul does not provide any new way to expose your apps. You need to create the Ingress load balancer as before.
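A minimal sketch of what that could look like for the sampleapp Deployment above (the Service name, Ingress name, and host are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  selector:
    app: sampleapp
  ports:
  - port: 80
    targetPort: 8080   # the containerPort from the Deployment
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sampleapp-ingress
spec:
  rules:
  - host: sampleapp.example.com   # illustrative host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sampleapp
            port:
              number: 80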