Unable to expose Prometheus using NodePort

I have Prometheus installed on GCP, and I'm able to do a port-forward and access the Prometheus UI.
Prometheus pods and endpoints on GCP:
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get pods -n monitoring -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-5ccfb68647-8fjrz 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
grafana-5ccfb68647-h7vbr 1/1 Running 0 5h24m 10.76.0.9 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-hw6d5 1/1 Running 0 5h24m 10.76.0.4 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-znjs6 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
prometheus-prometheus-0 2/2 Running 0 5h24m 10.76.0.10 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-1 2/2 Running 0 5h24m 10.76.0.7 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-2 2/2 Running 0 5h24m 10.76.0.11 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get endpoints -n monitoring
NAME ENDPOINTS AGE
grafana 10.76.0.9:3000 28h
grafana-lb 10.76.0.9:3000 54m
prometheus-lb 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 155m
prometheus-nodeport 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 149m
prometheus-operated 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 28h
prometheus-operator 10.76.0.4:8080 29h
I've created a NodePort service (port 30900) and also created a firewall rule allowing ingress to port 30900.
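For reference, I created the firewall rule with something along these lines (the rule name, source range, and target tag are illustrative placeholders, not the exact values used):
gcloud compute firewall-rules create allow-prometheus-nodeport \
    --network=default \
    --allow=tcp:30900 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=<gke-node-tag>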
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get svc -n monitoring | grep prometheus-nodeport
prometheus-nodeport NodePort 10.80.7.195 <none> 9090:30900/TCP 146m
However, when I try to access http://<node_ip>:30900, the URL is not reachable.
Telnet to the host/port is not working either:
Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
Request timeout for icmp_seq 0
Here is the yaml used to create the NodePort (in monitoring namespace)
apiVersion: v1
kind: Service
metadata:
  name: prometheus-nodeport
spec:
  type: NodePort
  ports:
  - name: web
    nodePort: 30900
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    prometheus: prometheus
Any ideas on what the issue is? How do I debug/resolve this?

Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
The IPs you used above appear to be in the Pod CIDR range, judging from the endpoints listed in the question. These are not the worker node IPs. You first need to check that you can reach a worker node at all from the network you are on now (home? VPN? internet?), and that the worker node actually has the correct port (30900) open.
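A quick way to check (the node IP below is a placeholder; on GKE look at the EXTERNAL-IP column):
# List the worker nodes with their internal/external IPs; the NodePort listens there,
# not on the Pod IPs shown in the endpoints list
kubectl get nodes -o wide
# Then test the NodePort against a node IP
nc -vz <node_external_ip> 30900
curl http://<node_external_ip>:30900/graph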

Related

Getting prometheus/grafana and k3s to work together

To learn Kubernetes I've built myself a bare-metal cluster using 4 Raspberry Pis and set it up using k3s:
# curl -sfL https://get.k3s.io | sh -
I added nodes etc., everything comes up, I can see all the nodes, and almost everything is working as expected.
I wanted to monitor the PIs so I installed the kube-prometheus-stack with helm:
$ kubectl create namespace monitoring
$ helm install prometheus --namespace monitoring prometheus-community/kube-prometheus-stack
And now everything looks fantastic:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system helm-install-traefik-crd-s8zw5 0/1 Completed 0 5d21h
kube-system helm-install-traefik-rc9f2 0/1 Completed 1 5d21h
monitoring prometheus-prometheus-node-exporter-j85rw 1/1 Running 10 28h
kube-system metrics-server-86cbb8457f-mvbkl 1/1 Running 12 5d21h
kube-system coredns-7448499f4d-t7sp8 1/1 Running 13 5d21h
monitoring prometheus-prometheus-node-exporter-mmh2q 1/1 Running 9 28h
monitoring prometheus-prometheus-node-exporter-j4k4c 1/1 Running 10 28h
monitoring alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 10 28h
kube-system svclb-traefik-zkqd6 2/2 Running 6 19h
monitoring prometheus-prometheus-node-exporter-bft5t 1/1 Running 10 28h
kube-system local-path-provisioner-5ff76fc89d-g8tm6 1/1 Running 12 5d21h
kube-system svclb-traefik-jcxd2 2/2 Running 28 5d21h
kube-system svclb-traefik-mpbjm 2/2 Running 22 5d21h
kube-system svclb-traefik-7kxtw 2/2 Running 20 5d21h
monitoring prometheus-grafana-864598fd54-9548l 2/2 Running 10 28h
kube-system traefik-65969d48c7-9lh9m 1/1 Running 3 19h
monitoring prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 10 28h
monitoring prometheus-kube-state-metrics-76f66976cb-m8k2h 1/1 Running 6 28h
monitoring prometheus-kube-prometheus-operator-5c758db547-zsv4s 1/1 Running 6 28h
The services are all there:
$ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 5d21h
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 5d21h
kube-system metrics-server ClusterIP 10.43.80.65 <none> 443/TCP 5d21h
kube-system prometheus-kube-prometheus-kube-proxy ClusterIP None <none> 10249/TCP 28h
kube-system prometheus-kube-prometheus-kube-scheduler ClusterIP None <none> 10251/TCP 28h
monitoring prometheus-kube-prometheus-operator ClusterIP 10.43.180.73 <none> 443/TCP 28h
kube-system prometheus-kube-prometheus-coredns ClusterIP None <none> 9153/TCP 28h
kube-system prometheus-kube-prometheus-kube-etcd ClusterIP None <none> 2379/TCP 28h
kube-system prometheus-kube-prometheus-kube-controller-manager ClusterIP None <none> 10252/TCP 28h
monitoring prometheus-kube-prometheus-alertmanager ClusterIP 10.43.195.99 <none> 9093/TCP 28h
monitoring prometheus-prometheus-node-exporter ClusterIP 10.43.171.218 <none> 9100/TCP 28h
monitoring prometheus-grafana ClusterIP 10.43.20.165 <none> 80/TCP 28h
monitoring prometheus-kube-prometheus-prometheus ClusterIP 10.43.207.29 <none> 9090/TCP 28h
monitoring prometheus-kube-state-metrics ClusterIP 10.43.229.14 <none> 8080/TCP 28h
kube-system prometheus-kube-prometheus-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 28h
monitoring alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 28h
monitoring prometheus-operated ClusterIP None <none> 9090/TCP 28h
kube-system traefik LoadBalancer 10.43.20.17 192.168.76.200,192.168.76.201,192.168.76.202,192.168.76.203 80:31131/TCP,443:31562/TCP 5d21h
Namespaces:
$ kubectl get namespaces
NAME STATUS AGE
kube-system Active 5d21h
default Active 5d21h
kube-public Active 5d21h
kube-node-lease Active 5d21h
monitoring Active 28h
But I couldn't reach the Grafana service.
Fair enough, I thought, let's define an Ingress. But it didn't work:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: prometheus-grafana
            port:
              number: 80
I have no idea why the request isn't getting to the service, and I can't really see where the problem is. Although I understand containers etc. (I first had everything running on Docker Swarm), I don't really know where, if anywhere, this would show up in the logs.
I've spent the past couple of days trying all sorts of things, and I finally found a hint about namespaces, problems calling services across them, and something called "type: ExternalName".
I checked with curl from a pod inside the cluster, and the service delivers the data inside the "monitoring" namespace, but Traefik can't get there, or maybe can't even see it?
Having looked at the Traefik documentation, I found this regarding namespaces, but I have no idea where I would even start to look for it:
providers:
  kubernetesCRD:
    namespaces:
I'm assuming that k3s has set this up correctly as an empty array because I can't find anything on their site that tells me what to do with their combination of "klipper-lb" and "traefik".
I finally tried to define another service with an external name:
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-named
  namespace: kube-system
spec:
  type: ExternalName
  externalName: prometheus-grafana.monitoring.svc.cluster.local
  ports:
  - name: service
    protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: grafana-named
            port:
              number: 80
After 2-3 days I've tried everything I can think of and googled everything under the sun, and I still can't get to Grafana from outside the cluster nodes.
I am at a loss as to how I can make anything work with k3s. I installed Lens on my main PC and can see almost everything there, but I think the missing metrics information requires an Ingress or something like that too.
What do I have to do to get Traefik to do what I think is basically its job: route incoming requests to the backend services?
I filed a bug report on GitHub and one of the people there (thanks again brandond) pointed me in the right direction.
The network layer uses flannel for the in-cluster networking. The default backend is "vxlan", which is seemingly more complex and uses virtual ethernet adapters.
For my requirements (read: getting the cluster to even work), the solution was to change the backend to "host-gw".
This is done by adding "--flannel-backend=host-gw" to the k3s.service options on the controller node.
$ sudo systemctl edit k3s.service
### Editing /etc/systemd/system/k3s.service.d/override.conf
### Anything between here and the comment below will become the new contents of the file
[Service]
ExecStart=
ExecStart=/usr/local/bin/k3s \
server \
'--flannel-backend=host-gw'
### Lines below this comment will be discarded
The first "ExecStart=" clears the existing default start command so that it can be replaced by the second one.
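To apply the change and sanity-check the new backend, something like the following should work (the route output is illustrative; k3s defaults to the 10.42.0.0/16 pod CIDR, and the node IPs are placeholders):
sudo systemctl daemon-reload
sudo systemctl restart k3s
# With host-gw, each remote node's pod subnet shows up as a plain route via that
# node's LAN IP instead of going through the flannel.1 vxlan device
ip route | grep 10.42
# 10.42.1.0/24 via 192.168.76.201 dev eth0
# 10.42.2.0/24 via 192.168.76.202 dev eth0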
Now everything is working as I expected, and I can finally move forward with learning K8s.
I'll probably reactivate "vxlan" at some point and figure that out too.

What is the correct prometheus URL to be used by prometheus-adapter

I have successfully deployed
- prometheus via the helm chart kube-prometheus-stack (https://prometheus-community.github.io/helm-charts)
- prometheus-adapter via the helm chart prometheus-adapter (https://prometheus-community.github.io/helm-charts)
using the default configuration with slight customization.
I can access prometheus, grafana and alertmanager, query metrics and see fancy charts.
But prometheus-adapter keeps complaining on startup that it can't access/discover metrics:
I0326 08:16:52.266095 1 adapter.go:98] successfully using in-cluster auth
I0326 08:16:52.330094 1 dynamic_serving_content.go:111] Loaded a new cert/key pair for "serving-cert::/var/run/serving-cert/tls.crt::/var/run/serving-cert/tls.key"
E0326 08:16:52.334710 1 provider.go:227] unable to update list of all metrics: unable to fetch metrics for query "{namespace!=\"\",__name__!~\"^container_.*\"}": bad_response: unknown response code 404
I've tried various Prometheus URLs in the prometheus-adapter Deployment's command-line arguments, but the problem is more or less the same.
E.g. some of the URLs I've tried are:
--prometheus-url=http://prometheus-operated.prom.svc:9090
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090
There are the following services / pods running:
$ kubectl -n prom get pods
NAME READY STATUS RESTARTS AGE
alertmanager-prometheus-kube-prometheus-alertmanager-0 2/2 Running 0 16h
prometheus-adapter-76fcc79b7b-7xvrm 1/1 Running 0 10m
prometheus-grafana-559b79b564-bh85n 2/2 Running 0 16h
prometheus-kube-prometheus-operator-8556f58759-kl84l 1/1 Running 0 16h
prometheus-kube-state-metrics-6bfcd6f648-ms459 1/1 Running 0 16h
prometheus-prometheus-kube-prometheus-prometheus-0 2/2 Running 1 16h
prometheus-prometheus-node-exporter-2x6mt 1/1 Running 0 16h
prometheus-prometheus-node-exporter-bns9n 1/1 Running 0 16h
prometheus-prometheus-node-exporter-sbcjb 1/1 Running 0 16h
$ kubectl -n prom get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 16h
prometheus-adapter ClusterIP 10.0.144.45 <none> 443/TCP 16h
prometheus-grafana ClusterIP 10.0.94.160 <none> 80/TCP 16h
prometheus-kube-prometheus-alertmanager ClusterIP 10.0.0.135 <none> 9093/TCP 16h
prometheus-kube-prometheus-operator ClusterIP 10.0.170.205 <none> 443/TCP 16h
prometheus-kube-prometheus-prometheus ClusterIP 10.0.250.223 <none> 9090/TCP 16h
prometheus-kube-state-metrics ClusterIP 10.0.135.215 <none> 8080/TCP 16h
prometheus-operated ClusterIP None <none> 9090/TCP 16h
prometheus-prometheus-node-exporter ClusterIP 10.0.70.247 <none> 9100/TCP 16h
kubectl -n kube-system get deployment/metrics-server
NAME READY UP-TO-DATE AVAILABLE AGE
metrics-server 1/1 1 1 15d
Prometheus-adapter helm chart gets deployed using the following values:
prometheus:
  url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
certManager:
  enabled: true
What is the correct value of --prometheus-url for prometheus-adapter in my setup?
The problem is related to the additional path prefix used to expose Prometheus via Ingress.
I was using the additional path prefix /monitoring/prometheus/ in my Ingress configuration.
The solution is to tell prometheus-adapter, too, that Prometheus is reachable under this path prefix.
Thus the following makes prometheus-adapter happy:
--prometheus-url=http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local:9090/monitoring/prometheus/
And now I can see custom metrics when executing
kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
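For reference, the same thing expressed as chart values instead of a raw command-line flag would look roughly like this (a sketch; the path must match whatever prefix your Ingress adds):
# prometheus-adapter values.yaml (sketch)
prometheus:
  url: http://prometheus-kube-prometheus-prometheus.prom.svc.cluster.local
  port: 9090
  path: /monitoring/prometheus/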
Thank you "rock'n rolla" for the hints!
I'm using both helm charts (kube-prometheus-stack and prometheus-adapter).
The additional path prefix that works for me is "/", but the Prometheus URL must include the release name you gave the stack when running helm install. I'm using "prostack" as the stack name, so this finally works for me:
helm install <adapter-name> prometheus-community/prometheus-adapter -n <namespace> --set prometheus.url=http://prostack-kube-prometheus-s-prometheus.monitoring.svc.cluster.local --set prometheus.port=9090 --set prometheus.path=/
I struggled with the same issue.
When I installed my Prometheus server with Helm via the community chart, I got a message like this:
The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:
my-tag-prometheus-server.carbon.svc.cluster.local
Note that it says the service is accessible on port 80 and not 9090, which I had not noticed.
So in my values.yaml file for the prometheus-adapter chart I specified port 80 instead of the standard 9090, and it worked:
# Url to access prometheus
prometheus:
  # Value is templated
  url: http://my-tag-prometheus-server.carbon.svc.cluster.local
  port: 80
  path: ""

Hashicorp Consul, Agent/Client access

I am trying to set up Consul on Kubernetes via the Helm chart, https://www.consul.io/docs/k8s/helm
Based on my pre-Kubernetes knowledge: services access Consul via a Consul agent that runs on each host and listens on the host's IP.
Now I have deployed via the Helm chart to a Kubernetes cluster. First, the terminology: Consul agent vs. client in this setup? I presume they are the same.
Now, the setup:
Helm chart config (Terraform fragment), nothing specific to clients/agents and their service:
global:
  name: "consul"
  datacenter: "${var.consul_config.datacenter}"
server:
  storage: "${var.consul_config.storage}"
  connect: false
syncCatalog:
  enabled: true
  default: true
  k8sAllowNamespaces: ['*']
  k8sDenyNamespaces: [${join(",", var.consul_config.k8sDenyNamespaces)}]
Pods: the client/agent ones are a DaemonSet, not in host network mode.
kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-server-0 1/1 Running 0 11h
consul-server-1 1/1 Running 0 11h
consul-server-2 1/1 Running 0 11h
consul-sync-catalog-8b688ff9b-klqrv 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
Services
kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.consul <none> 11h
consul-dns ClusterIP 172.20.124.238 <none> 53/TCP,53/UDP 11h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 11h
consul-ui ClusterIP 172.20.131.29 <none> 80/TCP 11h
Question 1: Where is the service that targets the client (agent) pods, but not the server pods? Did I miss it in the Helm chart?
My plan, since I am not going to use host (Kubernetes node) networking:
Find the client/agent service or make my own (see the sketch after Question 2), to be used by Consul's consumers. E.g., this is the service address I would give to the consul-template init container in the application that consumes the config.
kubectl get pods --selector app=consul,component=client,release=consul
consul-8l587 1/1 Running 0 11h
consul-cfd8z 1/1 Running 0 11h
consul-vrmtp 1/1 Running 0 11h
Optional: add topologyKeys to the agent service, so each consumer does not cross the host boundary.
Question 2: Is this the right approach? Or is it done differently for Consul Kubernetes deployments?
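For reference, a minimal sketch of what such a client-only Service could look like, assuming the labels from the selector used above (app=consul, component=client, release=consul); the name and port are illustrative, and this is not something the chart created here:
# Hypothetical client-only Service (not part of the chart output above)
apiVersion: v1
kind: Service
metadata:
  name: consul-client
spec:
  clusterIP: None            # headless, so DNS resolves to the individual agent pods
  selector:
    app: consul
    component: client
    release: consul
  ports:
    - name: http
      port: 8500
      targetPort: 8500
      protocol: TCP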
You can use the Kubernetes downward API to inject the IP of the host as an environment variable for your pod.
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
    - name: example
      image: 'consul:latest'
      env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
      command:
        - '/bin/sh'
        - '-ec'
        - |
          export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
          consul kv put hello world
  restartPolicy: Never
See https://www.consul.io/docs/k8s/installation/install#accessing-the-consul-http-api for more info.

Introducing ingress to istio mesh

I had an Istio mesh with mTLS disabled, with the following pods and services. I'm using kubeadm.
pasan@ubuntu:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default debug-tools 2/2 Running 0 2h
default employee--debug-deployment-57947cf67-gwpjq 2/2 Running 0 2h
default employee--employee-deployment-5f4d7c9d78-sfmtx 2/2 Running 0 2h
default employee--gateway-deployment-bc646bd84-wnqwq 2/2 Running 0 2h
default employee--salary-deployment-d4969d6c8-lz7n7 2/2 Running 0 2h
default employee--sts-deployment-7bb9b44bf7-lthc8 1/1 Running 0 2h
default hr--debug-deployment-86575cffb6-6wrlf 2/2 Running 0 2h
default hr--gateway-deployment-8c488ff6-827pf 2/2 Running 0 2h
default hr--hr-deployment-596946948d-rzc7z 2/2 Running 0 2h
default hr--sts-deployment-694d7cff97-4nz29 1/1 Running 0 2h
default stock-options--debug-deployment-68b8fccb97-4znlc 2/2 Running 0 2h
default stock-options--gateway-deployment-64974b5fbb-rjrwq 2/2 Running 0 2h
default stock-options--stock-deployment-d5c9d4bc8-dqtrr 2/2 Running 0 2h
default stock-options--sts-deployment-66c4799599-xx9d4 1/1 Running 0 2h
pasan@ubuntu:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
employee--debug-service ClusterIP 10.104.23.141 <none> 80/TCP 2h
employee--employee-service ClusterIP 10.96.203.80 <none> 80/TCP 2h
employee--gateway-service ClusterIP 10.97.145.188 <none> 80/TCP 2h
employee--salary-service ClusterIP 10.110.167.162 <none> 80/TCP 2h
employee--sts-service ClusterIP 10.100.145.102 <none> 8080/TCP,8081/TCP 2h
hr--debug-service ClusterIP 10.103.81.158 <none> 80/TCP 2h
hr--gateway-service ClusterIP 10.106.183.101 <none> 80/TCP 2h
hr--hr-service ClusterIP 10.107.136.178 <none> 80/TCP 2h
hr--sts-service ClusterIP 10.105.184.100 <none> 8080/TCP,8081/TCP 2h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
stock-options--debug-service ClusterIP 10.111.51.88 <none> 80/TCP 2h
stock-options--gateway-service ClusterIP 10.100.81.254 <none> 80/TCP 2h
stock-options--stock-service ClusterIP 10.96.189.100 <none> 80/TCP 2h
stock-options--sts-service ClusterIP 10.108.59.68 <none> 8080/TCP,8081/TCP 2h
I accessed this service using a debug pod using the following command:
curl -X GET http://hr--gateway-service.default:80/info -H "Authorization: Bearer $token" -v
As the next step, I enabled mTLS in the mesh. As expected, the above curl command failed.
Now I want to set up an ingress controller so I can access the service mesh as I did before.
So I set up Gateway and VirtualService as below:
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr-ingress-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "hr--gateway-service.default"
EOF
cat <<EOF | kubectl apply -f -
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hr-ingress-virtual-service
spec:
  hosts:
  - "*"
  gateways:
  - hr-ingress-gateway
  http:
  - match:
    - uri:
        prefix: /info/
    route:
    - destination:
        port:
          number: 80
        host: hr--gateway-service
EOF
But I'm still getting the following output:
wso2carbon@gateway-5bd88fd679-l8jn5:~$ curl -X GET http://hr--gateway-service.default:80/info -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.106.183.101...
* Connected to hr--gateway-service.default (10.106.183.101) port 80 (#0)
> GET /info HTTP/1.1
> Host: hr--gateway-service.default
> User-Agent: curl/7.47.0
> Accept: */*
...
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Can you please let me know whether my ingress setup is correct, and how I can access the service using curl after the setup?
My ingress services are listed below:
ingress-nginx default-http-backend ClusterIP 10.105.46.168 <none> 80/TCP 3h
ingress-nginx ingress-nginx NodePort 10.110.75.131 172.17.17.100 80:30770/TCP,443:32478/TCP
istio-ingressgateway NodePort 10.98.243.205 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31775/TCP,8060:32436/TCP,853:31351/TCP,15030:32149/TCP,15031:32653/TCP 3h
@Pasan, to apply Istio CRDs (VirtualServices) to incoming traffic, you need to use Istio's ingress gateway as the point of ingress, as described here: https://istio.io/docs/tasks/traffic-management/ingress/
The ingressgateway is a wrapper around Envoy that is configurable using Istio's CRDs.
Basically, you don't need a second ingress controller; the default one is installed during the Istio installation. Find it by executing:
kubectl get services -n istio-system -l app=istio-ingressgateway
and with the ingress gateway IP execute:
curl -X GET http://{INGRESSGATEWAY_IP}/info -H "Authorization: Bearer $token" -H "Host: hr--gateway-service.default"
I added the host as a header because it is defined in the Gateway, meaning that ingress is allowed only for this host.
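Since istio-ingressgateway is a NodePort service in the listing above, the ingress gateway "IP" is in practice any node IP plus the node port mapped to port 80 (31380 here); roughly like this (the node IP is a placeholder):
# Look up the node port mapped to port 80 of the ingress gateway
kubectl -n istio-system get service istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==80)].nodePort}'
# -> 31380 in the listing above
curl -v http://<node-ip>:31380/info \
  -H "Authorization: Bearer $token" \
  -H "Host: hr--gateway-service.default"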

Accessing Kubernetes dashboard on Compute instance in Oracle Cloud

I have deployed Kubernetes and the dashboard onto a compute instance in Oracle Cloud.
The dashboard is installed along with Grafana on my compute instance.
NAME READY STATUS RESTARTS AGE
po/etcd-mst-instance1 1/1 Running 0 1h
po/heapster-7856f6b566-rkfx5 1/1 Running 0 1h
po/kube-apiserver-mst-instance1 1/1 Running 0 1h
po/kube-controller-manager-mst-instance1 1/1 Running 0 1h
po/kube-dns-d879d6bcb-b9zjf 3/3 Running 0 1h
po/kube-flannel-ds-lgklw 1/1 Running 0 1h
po/kube-proxy-g6vxm 1/1 Running 0 1h
po/kube-scheduler-mst-instance1 1/1 Running 0 1h
po/kubernetes-dashboard-dd5c889c-6vphq 1/1 Running 0 1h
po/monitoring-grafana-5d4d76cd65-p7n5l 1/1 Running 0 1h
po/monitoring-influxdb-787479f6fd-8qkg2 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/heapster ClusterIP 10.98.200.184 <none> 80/TCP 1h
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
svc/kubernetes-dashboard ClusterIP 10.107.155.3 <none> 443/TCP 1h
svc/monitoring-grafana ClusterIP 10.96.130.226 <none> 80/TCP 1h
svc/monitoring-influxdb ClusterIP 10.105.163.213 <none> 8086/TCP 1h
I am trying to access the dashboard via SSH and ran the following on my local computer:
ssh -L localhost:8001:172.31.4.117:6443 opc@xxxxxxxx
However, it gives me this error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm not sure what the best way to access the dashboard is. I am new to k8s and still at a beginner stage, so I want to ask for advice. I have also tried running kubectl proxy on my local computer, but when I try to access 127.0.0.1 it gives me this error:
I0804 17:01:28.902675 77193 logs.go:41] http: proxy error: dial tcp [::1]:8080: connect: connection refused
Would really appreciate any help, and thank you.
Kubernetes includes a web dashboard that can be used for basic management operations.
Once Dashboard is installed on your Kubernetes cluster, it can be accessed in a few different ways.
I prefer to use the kubectl proxy from the command line to access Kubernetes Dashboard.
kubectl handles authentication with the API server for you and forwards traffic between your cluster (with the Dashboard deployed inside) and your web browser.
Note that this works only for a web browser running locally, since the proxy listens on localhost.
From the command line:
kubectl proxy
Next, start browsing this address:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
In case the Kubernetes API server is exposed and accessible, you may try:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
where master-ip is the IP address of your Kubernetes master node where the API server is running.
On a single-node setup, another way is to use a NodePort configuration to access the Dashboard.
I found it on the dashboard wiki.
Here is a sample configuration to consider and adapt to your needs:
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head
spec:
  clusterIP: <your-cluster-ip>
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
After applying the configuration, check the exposed HTTPS port using the command:
kubectl -n kube-system get service kubernetes-dashboard
If it returns, for example, 31707, you can start your browser with:
https://<master-ip>:31707
I was inspired by the web UI dashboard guide and the accessing-dashboard wiki.