ingress-nginx working but nginx-ingress not - kubernetes

I have Keycloak installed on my Kubernetes cluster.
The default ingress which Keycloak creates looks like this:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    route.openshift.io/termination: passthrough
  creationTimestamp: "2022-11-09T13:08:00Z"
  generation: 1
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
  name: keycloak-kc-ingress
  namespace: default
  ownerReferences:
  - apiVersion: k8s.keycloak.org/v2alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Keycloak
    name: keycloak-kc
    uid: 67a18d00-4bee-4587-b330-cdaf21b39084
  resourceVersion: "155002"
  uid: 87c2aff4-1489-4ba9-bdf6-9fe1a288c800
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: keycloak.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        pathType: ImplementationSpecific
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.3
After installing ingress-nginx and adding the kubernetes.io/ingress.class=nginx annotation, everything works.
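For reference, that change amounted to roughly the following (a sketch; the annotation can just as well be set in the Ingress manifest itself):

# add the ingress class annotation to the existing Keycloak ingress
kubectl annotate ingress keycloak-kc-ingress -n default \
  kubernetes.io/ingress.class=nginx --overwrite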
For various reasons, however, I need to use nginx-ingress instead.
My new ingress looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # route.openshift.io/termination: passthrough
  labels:
    app: keycloak
    app.kubernetes.io/managed-by: keycloak-operator
    # target: keycloak-kc-service
  name: keycloak-kc-ingress
  namespace: default
spec:
  defaultBackend:
    service:
      name: keycloak-kc-service
      port:
        number: 8443
  rules:
  - host: accounts.example.com
    http:
      paths:
      - backend:
          service:
            name: keycloak-kc-service
            port:
              number: 8443
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - accounts.example.com
    secretName: keycloak-tls-secret
Unfortunately, this ingress returns a "502 Bad Gateway" error.
We haven't been able to resolve it. Please help.
Information for debugging
kubectl get deployments -A
NAMESPACE              NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
default                keycloak-operator                           2/2     2            2           141m
kube-system            cilium-operator                             1/1     1            1           148m
kube-system            coredns                                     2/2     2            2           148m
kube-system            konnectivity-agent                          2/2     2            2           148m
kube-system            metrics-server                              2/2     2            2           148m
kubernetes-dashboard   dashboard-metrics-scraper                   2/2     2            2           148m
nginx-ingress          nginx-ingress-nginx-ingress-nginx-ingress   1/1     1            1           127m
olm                    catalog-operator                            1/1     1            1           142m
olm                    olm-operator                                1/1     1            1           142m
olm                    packageserver                               2/2     2            2           142m
kubectl get services -A
NAMESPACE              NAME                                        TYPE           CLUSTER-IP      EXTERNAL-IP                                      PORT(S)                      AGE
default                keycloak-kc-discovery                       ClusterIP      None            <none>                                           7800/TCP                     114m
default                keycloak-kc-service                         ClusterIP      10.240.18.67    <none>                                           8443/TCP                     114m
default                keycloak-operator                           ClusterIP      10.240.24.103   <none>                                           80/TCP                       141m
default                kubernetes                                  ClusterIP      10.240.16.1     <none>                                           443/TCP                      149m
default                postgres-db                                 ClusterIP      10.240.18.157   <none>                                           5432/TCP                     140m
kube-system            hcloud-csi-controller-metrics               ClusterIP      10.240.30.190   <none>                                           9189/TCP                     149m
kube-system            hcloud-csi-node-metrics                     ClusterIP      10.240.26.123   <none>                                           9189/TCP                     149m
kube-system            kube-dns                                    ClusterIP      10.240.16.10    <none>                                           53/TCP,53/UDP                149m
kube-system            metrics-server                              ClusterIP      10.240.31.184   <none>                                           443/TCP                      149m
kubernetes-dashboard   dashboard-metrics-scraper                   ClusterIP      10.240.25.29    <none>                                           8000/TCP                     149m
nginx-ingress          nginx-ingress-nginx-ingress-nginx-ingress   LoadBalancer   10.240.26.173   10.0.0.3,167.235.123.123,2a01:4f8:1c1f:6484::1   80:31670/TCP,443:30557/TCP   128m
olm                    operatorhubio-catalog                       ClusterIP      10.240.22.30    <none>                                           50051/TCP                    142m
olm                    packageserver-service                       ClusterIP      10.240.23.246   <none>
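Commands that could be used to narrow down the 502 further (a sketch, using the names from the listings above):

# does the Keycloak service have ready endpoints?
kubectl get endpoints keycloak-kc-service -n default
# what does the nginx-ingress controller log when the 502 happens?
kubectl logs -n nginx-ingress deploy/nginx-ingress-nginx-ingress-nginx-ingress --tail=50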


404 Not Found error after configuring the Nginx Ingress Controller

UPDATE:
The issue persists, but I used another approach (a sub-domain name instead of the path) to 'bypass' it:
ubuntu#df1:~$ cat k8s-dashboard-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - dashboard.XXXX
    secretName: df1-tls
  rules:
  - host: dashboard.XXXX
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
This error has been bothering me for some time and I hope that with your help I can get to the bottom of it.
I have one K8S cluster (a single node so far, to avoid any network-related issues). I installed Grafana on it.
All pods are running fine:
ubuntu:~$ k get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default grafana-646c8874cb-h6tc5 1/1 Running 0 11h
default nginx-1-7bdc99b884-xh7kl 1/1 Running 0 36h
kube-system coredns-64897985d-4sk6l 1/1 Running 0 2d16h
kube-system coredns-64897985d-dx5h6 1/1 Running 0 2d16h
kube-system etcd-df1 1/1 Running 1 3d14h
kube-system kilo-kb52f 1/1 Running 0 2d16h
kube-system kube-apiserver-df1 1/1 Running 1 3d14h
kube-system kube-controller-manager-df1 1/1 Running 4 3d14h
kube-system kube-flannel-ds-fjkxv 1/1 Running 0 3d13h
kube-system kube-proxy-bd2xt 1/1 Running 0 3d14h
kube-system kube-scheduler-df1 1/1 Running 10 3d14h
kubernetes-dashboard dashboard-metrics-scraper-799d786dbf-5skdw 1/1 Running 0 2d16h
kubernetes-dashboard kubernetes-dashboard-6b6b86c4c5-56zp2 1/1 Running 0 2d16h
nginx-ingress nginx-ingress-5b467c7d7-qtqtq 1/1 Running 0 2d15h
As you can see, I have the nginx ingress controller installed.
Here is the ingress:
ubuntu:~$ k describe ing grafana
Name: grafana
Labels: app.kubernetes.io/instance=grafana
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=grafana
app.kubernetes.io/version=8.3.3
helm.sh/chart=grafana-6.20.5
Namespace: default
Address:
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
kalepa.k8s.io
/grafana grafana:80 (10.244.0.14:3000)
Annotations: meta.helm.sh/release-name: grafana
meta.helm.sh/release-namespace: default
Events: <none>
Here is the service that is defined in above ingress:
ubuntu:~$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.96.148.1 <none> 80/TCP 11h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d14h
If I curl the cluster IP of the service, it goes through without an issue:
ubuntu:~$ curl 10.96.148.1
Found.
If I curl the hostname with the path to the service, I get a 404 error:
ubuntu:~$ curl kalepa.k8s.io/grafana
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
The hostname resolves to the cluster IP of the nginx ingress service (NodePort):
ubuntu:~$ grep kalepa.k8s.io /etc/hosts
10.96.241.112 kalepa.k8s.io
This is the nginx ingress service definition:
ubuntu:~$ k describe -n nginx-ingress svc nginx-ingress
Name: nginx-ingress
Namespace: nginx-ingress
Labels: <none>
Annotations: <none>
Selector: app=nginx-ingress
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.96.241.112
IPs: 10.96.241.112
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31803/TCP
Endpoints: 10.244.0.6:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31913/TCP
Endpoints: 10.244.0.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
What am I missing? Thanks for your help!
This is happening because you are using /grafana, and this path does not exist in the Grafana application - hence the 404. You first need to configure Grafana to use this context path before you can forward your traffic to /grafana.
If you use / as the path, it will work. That's why curl 10.96.148.1 works: you are not adding the /grafana route. But most likely / is already used by some other service, which is why you were using /grafana to begin with.
Therefore, you need to update your grafana.ini file to set the root URL explicitly, as shown below.
You can put grafana.ini in a ConfigMap, mount it at the original grafana.ini location and recreate the deployment (see the sketch after the snippet).
[server]
domain = kalepa.k8s.io
root_url = http://kalepa.k8s.io/grafana/
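A minimal sketch of that ConfigMap approach (the ConfigMap name and mount details are assumptions for illustration; /etc/grafana/grafana.ini is the default location in the Grafana image):

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-ini            # assumed name
  namespace: default
data:
  grafana.ini: |
    [server]
    domain = kalepa.k8s.io
    root_url = http://kalepa.k8s.io/grafana/
---
# fragment to add to the grafana Deployment's pod spec:
#   volumes:
#   - name: grafana-ini
#     configMap:
#       name: grafana-ini
#   volumeMounts (in the grafana container):
#   - name: grafana-ini
#     mountPath: /etc/grafana/grafana.ini
#     subPath: grafana.ini

Then restart the deployment so it picks up the new file, e.g. kubectl rollout restart deployment grafana.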
I can see there is no ingressClassName specified for your ingress. It should look something like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - kalepa.k8s.io
    secretName: secret_name
  rules:
  - host: kalepa.k8s.io
    http:
      paths:
      ...

Kubernetes Istio ingress gateway responds with 503 always

I'm configuring Istio using Helm. Here you can find my istio-config.yaml:
global:
  proxy:
    accessLogFile: "/dev/stdout"
    resources:
      requests:
        cpu: 10m
        memory: 40Mi
  disablePolicyChecks: false
sidecarInjectorWebhook:
  enabled: true
  rewriteAppHTTPProbe: false
pilot:
  autoscaleEnabled: false
  traceSampling: 100.0
  resources:
    requests:
      cpu: 10m
      memory: 100Mi
mixer:
  policy:
    enabled: true
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 10m
        memory: 100Mi
  telemetry:
    enabled: true
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 50m
        memory: 100Mi
  adapters:
    stdio:
      enabled: true
grafana:
  enabled: true
tracing:
  enabled: true
kiali:
  enabled: true
  createDemoSecret: true
gateways:
  istio-ingressgateway:
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 10m
        memory: 40Mi
  istio-egressgateway:
    enabled: true
    autoscaleEnabled: false
    resources:
      requests:
        cpu: 10m
        memory: 40Mi
global:
  controlPlaneSecurityEnabled: false
  mtls:
    enabled: false
Then I deployed a bunch of microservices using istioctl; all of them are simple REST services over HTTP. They can communicate with each other without any issue, and if I expose them with NodePorts I can reach and communicate with them correctly.
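(The deployment step for each of them was presumably along these lines; a sketch, since the exact manifests are not shown and api-exchange.yaml is just an example file name:)

istioctl kube-inject -f api-exchange.yaml | kubectl apply -f -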
Here are my Services:
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default activemq ClusterIP None <none> 61616/TCP 3h17m
default activemq-np NodePort 10.110.76.147 <none> 8161:30061/TCP 3h17m
default api-exchange ClusterIP None <none> 8080/TCP 3h16m
default api-response ClusterIP None <none> 8080/TCP 3h16m
default authorization-server ClusterIP None <none> 8080/TCP 3h17m
default de-communication ClusterIP None <none> 8080/TCP 3h16m
default gateway ClusterIP None <none> 8080/TCP 3h17m
default gateway-np NodePort 10.96.123.57 <none> 8080:30080/TCP 3h17m
default identity ClusterIP None <none> 88/TCP,8080/TCP 3h18m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h19m
default matchengine ClusterIP None <none> 8080/TCP 3h16m
default monitor-redis ClusterIP None <none> 8081/TCP 3h17m
default monitor-redis-np NodePort 10.106.178.13 <none> 8081:30082/TCP 3h17m
default postgres ClusterIP None <none> 5432/TCP 3h18m
default postgres-np NodePort 10.106.223.216 <none> 5432:30032/TCP 3h18m
default redis ClusterIP None <none> 6379/TCP 3h18m
default redis-np NodePort 10.101.167.194 <none> 6379:30079/TCP 3h18m
default synchronization ClusterIP None <none> 8080/TCP 3h15m
default tx-flow ClusterIP None <none> 8080/TCP 3h15m
default tx-manager ClusterIP None <none> 8080/TCP 3h15m
default tx-scheduler ClusterIP None <none> 8080/TCP 3h15m
default ubc-config ClusterIP None <none> 8080/TCP 3h16m
default ubc-services-config ClusterIP None <none> 8888/TCP 3h18m
default ubc-services-config-np NodePort 10.110.11.213 <none> 8888:30088/TCP 3h18m
default user-admin ClusterIP None <none> 8080/TCP 3h17m
default web-exchange-np NodePort 10.105.244.194 <none> 80:30081/TCP 3h15m
istio-system grafana ClusterIP 10.97.134.230 <none> 3000/TCP 3h22m
istio-system istio-citadel ClusterIP 10.99.159.56 <none> 8060/TCP,15014/TCP 3h22m
istio-system istio-egressgateway ClusterIP 10.97.71.204 <none> 80/TCP,443/TCP,15443/TCP 3h22m
istio-system istio-galley ClusterIP 10.98.111.27 <none> 443/TCP,15014/TCP,9901/TCP 3h22m
istio-system istio-ingressgateway LoadBalancer 10.96.182.202 <pending> 15020:30936/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:31913/TCP,15030:30606/TCP,15031:32127/TCP,15032:30362/TCP,15443:31416/TCP 3h22m
istio-system istio-pilot ClusterIP 10.101.117.169 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 3h22m
istio-system istio-policy ClusterIP 10.97.247.54 <none> 9091/TCP,15004/TCP,15014/TCP 3h22m
istio-system istio-sidecar-injector ClusterIP 10.101.219.141 <none> 443/TCP 3h22m
istio-system istio-telemetry ClusterIP 10.109.108.78 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 3h22m
istio-system jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 3h22m
istio-system jaeger-collector ClusterIP 10.97.255.231 <none> 14267/TCP,14268/TCP 3h22m
istio-system jaeger-query ClusterIP 10.104.80.162 <none> 16686/TCP 3h22m
istio-system kiali ClusterIP 10.104.41.71 <none> 20001/TCP 3h22m
istio-system kiali-np NodePort 10.100.99.141 <none> 20001:30085/TCP 29h
istio-system prometheus ClusterIP 10.110.46.60 <none> 9090/TCP 3h22m
istio-system tracing ClusterIP 10.111.173.205 <none> 80/TCP 3h22m
istio-system zipkin ClusterIP 10.101.144.199 <none> 9411/TCP 3h22m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 54d
kube-system tiller-deploy ClusterIP 10.105.162.195 <none> 44134/TCP 24d
I created an ingress Gateway and one VirtualService to route calls from outside the cluster. Here are my Gateway and VirtualService configurations:
Gateway:
$ kubectl describe gateway iris-gateway
Name: iris-gateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"iris-gateway","namespace":"default"},"s...
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
Creation Timestamp: 2019-08-23T17:25:20Z
Generation: 1
Resource Version: 7093263
Self Link: /apis/networking.istio.io/v1alpha3/namespaces/default/gateways/iris-gateway
UID: 4c4fac7d-a698-4c9c-97e6-ebc7416c96a8
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http
Number: 80
Protocol: HTTP
Events: <none>
Virtual Services:
$ kubectl describe virtualservice apiexg
Name: apiexg
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.istio.io/v1alpha3","kind":"VirtualService","metadata":{"annotations":{},"name":"apiexg","namespace":"default"},"...
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Creation Timestamp: 2019-08-23T19:26:16Z
Generation: 1
Resource Version: 7107510
Self Link: /apis/networking.istio.io/v1alpha3/namespaces/default/virtualservices/apiexg
UID: 861bca0d-be98-4bfb-bf92-b2bd2f1b703f
Spec:
Gateways:
iris-gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /api-exchange
Route:
Destination:
Host: api-exchange.default.svc.cluster.local
Port:
Number: 8080
Events: <none>
When I make a call to the service I always get a 503 Service Unavailable:
curl -X POST http://172.30.7.129:31380/api-exchange/ -vvv
* About to connect() to 172.30.7.129 port 31380 (#0)
* Trying 172.30.7.129...
* Connected to 172.30.7.129 (172.30.7.129) port 31380 (#0)
> POST /api-exchange/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: 172.30.7.129:31380
> Accept: */*
>
< HTTP/1.1 503 Service Unavailable
< content-length: 19
< content-type: text/plain
< date: Fri, 23 Aug 2019 21:49:33 GMT
< server: istio-envoy
<
* Connection #0 to host 172.30.7.129 left intact
no healthy upstream
Here is log output for istio-ingressgateway pod:
[2019-08-23 21:49:34.185][38][warning][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:110] original_dst_load_balancer: No downstream connection or no original_dst.
Versions:
$ istioctl version --remote
client version: 1.2.4
citadel version: 1.2.4
egressgateway version: 94746ccd404a8e056483dd02e4e478097b950da6-dirty
galley version: 1.2.4
ingressgateway version: 94746ccd404a8e056483dd02e4e478097b950da6-dirty
pilot version: 1.2.4
policy version: 1.2.4
sidecar-injector version: 1.2.4
telemetry version: 1.2.4
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.1", GitCommit:"4485c6f18cee9a5d3c3b4e523bd27972b1b53892", GitTreeState:"clean", BuildDate:"2019-07-18T09:18:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:32:14Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Istio installation:
$ helm install /opt/istio-1.2.4/install/kubernetes/helm/istio-init --name istio-init --namespace istio-system
$ helm install /opt/istio-1.2.4/install/kubernetes/helm/istio --name istio --namespace istio-system --values istio-config/istio-config.yaml
Environment:
I did the same configuration over a Oracle Virtual Appliance Virtual Server with RHEL 7 and over a cluster of 3 Physical Servers with RHEL 7.
I solved this problem. The istio-ingressgateway was not able to route the traffic because my services were headless, i.e. had clusterIP: None assigned:
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default activemq ClusterIP None <none> 61616/TCP 3h17m
default api-exchange ClusterIP None <none> 8080/TCP 3h16m
default api-response ClusterIP None <none> 8080/TCP 3h16m
default authorization-server ClusterIP None <none> 8080/TCP 3h17m
default de-communication ClusterIP None <none> 8080/TCP 3h16m
default gateway ClusterIP None <none> 8080/TCP 3h17m
default identity ClusterIP None <none> 88/TCP,8080/TCP 3h18m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h19m
default matchengine ClusterIP None <none> 8080/TCP 3h16m
default monitor-redis ClusterIP None <none> 8081/TCP 3h17m
default postgres ClusterIP None <none> 5432/TCP 3h18m
default redis ClusterIP None <none> 6379/TCP 3h18m
default synchronization ClusterIP None <none> 8080/TCP 3h15m
default tx-flow ClusterIP None <none> 8080/TCP 3h15m
default tx-manager ClusterIP None <none> 8080/TCP 3h15m
default tx-scheduler ClusterIP None <none> 8080/TCP 3h15m
default ubc-config ClusterIP None <none> 8080/TCP 3h16m
default ubc-services-config ClusterIP None <none> 8888/TCP 3h18m
default user-admin ClusterIP None <none> 8080/TCP 3h17m
Here is one of my YAMLs with clusterIP: None:
apiVersion: v1
kind: Service
metadata:
  name: ubc-config
  labels:
    app: ubc-config
spec:
  clusterIP: None
  ports:
  - port: 8080
    name: ubc-config
  selector:
    app: ubc-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubc-config
spec:
  selector:
    matchLabels:
      app: ubc-config
  replicas: 1
  template:
    metadata:
      labels:
        app: ubc-config
    spec:
      containers:
      - name: ubc-config
        image: ubc-config
        ports:
        - containerPort: 8080
As you can see, Service.spec.clusterIP is set to None. To solve the problem I only had to change my YAML configuration to:
apiVersion: v1
kind: Service
metadata:
  name: ubc-config
  labels:
    app: ubc-config
spec:
  ports:
  - port: 8080
    name: http-ubcconfig
  selector:
    app: ubc-config
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ubc-config
spec:
  selector:
    matchLabels:
      app: ubc-config
  replicas: 1
  template:
    metadata:
      labels:
        app: ubc-config
    spec:
      containers:
      - name: ubc-config
        image: ubc-config
        ports:
        - containerPort: 8080
          name: http-ubcconfig
I hope this helps someone.
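A quick way to verify the fix is that the Service now gets a real ClusterIP and has endpoints behind it (a sketch):

kubectl get svc ubc-config
kubectl get endpoints ubc-config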

how to access istio mesh from browser

I'm trying to inject Istio into my Kubernetes cluster in a minikube environment on my local Ubuntu 16.04 system. This is my deployment YAML:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-master
  labels:
    run: nodejs-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-master
    spec:
      containers:
      - name: nodejs-master
        image: hegdemahendra9/nodejs-master:v1
        ports:
        - containerPort: 8080
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: port1
    protocol: TCP
    port: 8080
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodejs-slave
  labels:
    run: nodejs-slave
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: nodejs-slave
    spec:
      containers:
      - name: nodejs-slave
        image: hegdemahendra9/nodejs-slave:v1
        ports:
        - containerPort: 8081
          protocol: TCP
---
kind: Service
apiVersion: v1
metadata:
  name: nodejs-slave
spec:
  selector:
    run: nodejs-slave
  ports:
  - name: port1
    protocol: TCP
    port: 8081
    targetPort: 8081
  type: NodePort
I've enabled automatic sidecar injection and ran $ kubectl apply -f deployment.yaml.
I've installed Istio via this method.
Here are my Istio installation details:
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
istio-citadel-6d7f9c545b-r665q 1/1 Running 0 2h
istio-cleanup-secrets-qg4zh 0/1 Completed 0 2h
istio-egressgateway-866885bb49-9l5rx 1/1 Running 0 2h
istio-galley-6d74549bb9-jslss 1/1 Running 0 2h
istio-ingressgateway-6c6ffb7dc8-rzvxb 1/1 Running 0 2h
istio-pilot-685fc95d96-6296x 0/2 Pending 0 2h
istio-policy-688f99c9c4-trg2j 2/2 Running 0 2h
istio-security-post-install-gs6vk 0/1 Completed 0 2h
istio-sidecar-injector-74855c54b9-j94qr 1/1 Running 0 2h
istio-telemetry-69b794ff59-rqbzw 2/2 Running 0 2h
prometheus-f556886b8-kj5ks 1/1 Running 0 2h
$ kubectl get svc -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-citadel ClusterIP 10.108.144.211 <none> 8060/TCP,9093/TCP 2h
istio-egressgateway NodePort 10.99.160.138 <none> 80:32415/TCP,443:32480/TCP 2h
istio-galley ClusterIP 10.97.0.188 <none> 443/TCP,9093/TCP 2h
istio-ingressgateway NodePort 10.97.75.20 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:32188/TCP,8060:31372/TCP,853:31197/TCP,15030:30606/TCP,15031:31026/TCP 2h
istio-pilot ClusterIP 10.106.145.225 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 2h
istio-policy ClusterIP 10.110.104.100 <none> 9091/TCP,15004/TCP,9093/TCP 2h
istio-sidecar-injector ClusterIP 10.99.236.121 <none> 443/TCP 2h
istio-telemetry ClusterIP 10.103.92.170 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 2h
prometheus ClusterIP 10.105.31.126 <none> 9090/TCP
Here are my deployment details:
$kubectl get pods
NAME READY STATUS RESTARTS AGE
nodejs-master-6494d9dd66-pdbd6 2/2 Running 0 2h
nodejs-slave-599cd5d676-6w4s8 2/2 Running 0 2h
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
nodejs-master ClusterIP 10.104.99.240 <none> 8080/TCP 2h
nodejs-slave NodePort 10.101.120.229 <none> 8081:31263/TCP 2h
Here's my gateway yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: ms-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: mater-slave
spec:
  hosts:
  - "*"
  gateways:
  - ms-gateway
  http:
  - match:
    - uri:
        prefix: /master
    route:
    - destination:
        host: nodejs-master
        port:
          number: 8080
I've applied my gateway using the kubectl apply command, and I'm trying to access it using
http://$(minikube ip):$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')/master
i.e. http://192.168.99.100:31380/master
but I'm getting a connection refused error. Someone please help.
Thanks in advance.
Maybe it's the name of the service port. It should follow Istio's port-naming convention, e.g. "http-*" for HTTP traffic (or "tcp-*" for plain TCP). https://istio.io/docs/setup/kubernetes/additional-setup/requirements/
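As a sketch of what that would mean for the nodejs-master Service above (renaming port1 to carry a protocol prefix; http-web is just an example name, and the right prefix depends on the protocol the app actually speaks):

kind: Service
apiVersion: v1
metadata:
  name: nodejs-master
spec:
  selector:
    run: nodejs-master
  ports:
  - name: http-web      # was "port1"; Istio expects <protocol>[-<suffix>]
    protocol: TCP
    port: 8080
    targetPort: 8080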

What is URL for Kibana UI

http://grs-preprodkubemaster01:5601/kibana
I have followed the docs and installed Kibana. When I used the service with type: LoadBalancer, the service wasn't
coming up, so I deleted the type: LoadBalancer and let it default to ClusterIP, and it came up fine. (Note: I don't have AWS.)
But I am not sure how to access the UI; I tried this URL but it's not working:
http://my-preprodkubemaster01/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/app/kibana
Any ideas how to access the Kibana UI? I checked the service and deployment and everything has a green check.
Another thing I tried is this URL, which I got from the command kubectl cluster-info:
https://10.123.24.107:6443/api/v1/namespaces/kube-system/services/kibana-logging/proxy
However, this is showing me this error
{
  kind: "Status",
  apiVersion: "v1",
  metadata: { },
  status: "Failure",
  message: "services "kibana-logging" is forbidden: User "system:anonymous" cannot get services/proxy in the namespace "kube-system"",
  reason: "Forbidden",
  details: {
    name: "kibana-logging",
    kind: "services"
  },
  code: 403
}
So, as another try I used Kibana service as NodePort, but that didn't work either.
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  selector:
    k8s-app: kibana-logging
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 30887
$ kubectl -n kube-system get rc,svc,cm,po
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/elasticsearch-logging ClusterIP 10.98.10.182 <none> 9200/TCP 12m
svc/heapster ClusterIP 10.107.184.85 <none> 80/TCP 3d
svc/kibana-logging NodePort 10.102.254.129 <none> 5601:30887/TCP 12m
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 3d
svc/kubernetes-dashboard ClusterIP 10.105.30.246 <none> 80/TCP 3d
svc/monitoring-influxdb ClusterIP 10.109.144.39 <none> 8086/TCP 3d
I would like to know what URL I should be using to access the Kibana UI. Please note that I have not tried kubectl proxy and I would like to have it work without it.
Use the NodePort you defined in your service:
https://10.123.24.107:30887
The most common way to expose an internal service outside the cluster is an Ingress.
First, you need to have an Ingress controller running in your Kubernetes cluster.
There are two types of maintained Ingress controllers - GCE and nginx
Then, you need to create a yaml file as shown below and change it according to your needs:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80
When you create it using kubectl create -f, you should see something like this:
$ kubectl get ingress
NAME RULE BACKEND ADDRESS
test-ingress - testsvc:80 1.2.3.4
In this example, 1.2.3.4 is the IP allocated by Ingress controller.
When you have all things in place, you'll be able to access your application (Kibana) by IP 1.2.3.4
Please find more examples and use cases in Ingress documentation
You can also expose a Kubernetes service without using the Ingress resource:
Service.Type=LoadBalancer
Service.Type=NodePort
Port Proxy
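For a quick local test, the simplest of those is a port-forward (a sketch, assuming the kibana-logging service in kube-system):

kubectl -n kube-system port-forward svc/kibana-logging 5601:5601
# then open http://localhost:5601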
I got it to work with these changes in the ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.org/rewrites: "serviceName=kubernetes-dashboard rewrite=/;serviceName=kibana-logging rewrite=/"
spec:
  rules:
  - host: HOSTNAME_OF_MASTER
    http:
      paths:
      - path: /kube-ui/
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 80
      - path: /kibana/
        backend:
          serviceName: kibana-logging
          servicePort: 5601
and my Kibana service is set up as NodePort:
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana-logging
and the dashboard is also configured like this:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
Once you have the svc running you can access Kibana using the NodePort from any node. Example: http://node01_ip:31325/app/kibana
$ kubectl get svc -o wide -n=kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
elasticsearch-logging ClusterIP 10.xx.120.130 <none> 9200/TCP 11h k8s-app=elasticsearch-logging
heapster ClusterIP 10.xx.232.165 <none> 80/TCP 11h k8s-app=heapster
kibana-logging NodePort 10.xx.39.255 <none> 5601:31325/TCP 11h k8s-app=kibana-logging
kube-dns ClusterIP 10.xx.0.xx <none> 53/UDP,53/TCP 12h k8s-app=kube-dns
kubernetes-dashboard NodePort 10.xx.xx.xx <none> 80:32086/TCP 11h k8s-app=kubernetes-dashboard
monitoring-influxdb ClusterIP 10.13.199.138 <none> 8086/TCP 11h k8s-app=influxdb
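If you don't remember which NodePort was assigned, you can look it up directly (a sketch):

kubectl -n kube-system get svc kibana-logging -o jsonpath='{.spec.ports[0].nodePort}'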

kubernetes ingress nginx controller

In a kubeadm Kubernetes cluster I deployed my app with:
kubectl run gmt-dpl --image=192.168.56.33:5000/img:gmt --port 8181
Expose the app with:
kubectl expose deployment gmt-dpl --name=gmt-svc --type=NodePort --port=443 --target-port=8181
My gmt-svc:
Name: gmt-svc
Namespace: default
Labels: run=gmt-dpl
Annotations: <none>
Selector: run=gmt-dpl
Type: NodePort
IP: 10.96.74.133
Port: <unset> 443/TCP
TargetPort: 8181/TCP
NodePort: <unset> 30723/TCP
Endpoints: 10.44.0.17:8181
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I'm able to access my app at https://nodeip:30723.
Following the GitHub doc I managed to install the nginx ingress controller:
NAMESPACE NAME READY STATUS RESTARTS AGE
default gmt-dpl-84fb9cfd8d-vz2bw 1/1 Running 0 1h
ingress-nginx default-http-backend-55c6c69b88-c4k6c 1/1 Running 5 2d
ingress-nginx nginx-ingress-controller-9c7b694-szgjd 1/1 Running 8 2d
kube-system etcd-k8s-master 1/1 Running 5 2d
kube-system kube-apiserver-k8s-master 1/1 Running 1 4h
kube-system kube-controller-manager-k8s-master 1/1 Running 7 2d
kube-system kube-dns-6f4fd4bdf-px5f8 3/3 Running 12 2d
kube-system kube-proxy-jswfx 1/1 Running 8 2d
kube-system kube-proxy-n2chh 1/1 Running 4 2d
kube-system kube-scheduler-k8s-master 1/1 Running 7 2d
kube-system weave-net-4k8sp 2/2 Running 10 1d
kube-system weave-net-bqjzb 2/2 Running 19 1d
I exposed the nginx-controller:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-svc
  namespace: ingress-nginx
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30001
    name: http
  - port: 443
    nodePort: 30000
    name: https
  - port: 18080
    nodePort: 30002
    name: status
  selector:
    app: ingress-nginx
And I created an ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
  name: test-ingress
spec:
  tls:
  - hosts:
    - kube.gmt.dz
    secretName: foo-secret
  rules:
  - host: kube.gmt.dz
    http:
      paths:
      - path: /*
        backend:
          serviceName: gmt-svc
          servicePort: 443
Now, when trying to access my app through the nginx controller at https://ip_master:30000, I get default backend - 404, and I get the same error with http://ip_master:30001, keeping in mind that my app is only accessible over HTTPS.
In the ingress resource I tried servicePort: 8181 but I got the same error.
Any ideas?
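A couple of checks that might help narrow it down (a sketch, using the names above):

# does the NodePort service actually select the controller pod?
kubectl get endpoints -n ingress-nginx nginx-ingress-controller-svc
# did the controller accept the ingress, and which backends does it see?
kubectl describe ingress test-ingress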