Grafana with Kubeflow

I am trying to integrate Grafana with my Kubeflow deployment in order to monitor my model.
I have no clue where to start, as I am not able to find anything in the documentation.
Can someone help?

To run Grafana with Kubeflow, follow these steps:
1) Create the namespace:
kubectl create namespace knative-monitoring
2) Set up the monitoring components:
kubectl apply --filename \
https://github.com/knative/serving/releases/download/v0.13.0/monitoring-metrics-prometheus.yaml
3) Launch the Grafana board via port forwarding:
kubectl port-forward --namespace knative-monitoring \
$(kubectl get pod --namespace knative-monitoring \
--selector="app=grafana" --output jsonpath='{.items[0].metadata.name}') 8080:3000
4) Access the Grafana dashboard at http://localhost:8080.
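If the dashboard doesn't come up, it's worth first checking that the Grafana pod is actually running (the label selector matches the one used in the port-forward above):
kubectl get pods --namespace knative-monitoring --selector="app=grafana"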

It depends on your configuration. I had a MiniKF instance running on an EC2 VM and needed to specify the address as 0.0.0.0 for the port-forwarding method to work.
kubectl port-forward --namespace knative-monitoring \
$(kubectl get pod --namespace knative-monitoring \
--selector="app=grafana" --output jsonpath='{.items[0].metadata.name}') \
--address 0.0.0.0 8080:3000
Then you should be able to access the Grafana dashboard at http://{your-kf-ip}:8080.

You can also expose it via Istio, using this VirtualService:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana-vs
  namespace: kubeflow
spec:
  gateways:
  - kubeflow-gateway
  hosts:
  - '*'
  http:
  - match:
    - method:
        regex: GET|POST
      uri:
        prefix: /istio/grafana/
    rewrite:
      uri: /
    route:
    - destination:
        host: grafana.istio-system.svc.cluster.local
        port:
          number: 3000
So if you usually visit your Kubeflow dashboard via https://kubeflow.example.com, having this exposed through kubeflow-gateway will allow you to access it at https://kubeflow.example.com/istio/grafana/.
If you're not using Istio's Grafana but Knative's, you can change the destination accordingly.
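For example, assuming Knative's Grafana Service is named grafana and lives in the knative-monitoring namespace (verify the name and port in your release), the destination would become:
route:
- destination:
    host: grafana.knative-monitoring.svc.cluster.local
    port:
      number: 3000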
You might also need to change the root URL of Grafana via an environment variable in Grafana's Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: istio-system
spec:
  template:
    spec:
      containers:
      - env:
        - name: GF_SERVER_ROOT_URL
          value: https://kubeflow.example.com/istio/grafana
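If you'd rather not edit the Deployment by hand, the same variable can be set in place (this assumes the Deployment is named grafana and lives in istio-system):
kubectl -n istio-system set env deployment/grafana GF_SERVER_ROOT_URL=https://kubeflow.example.com/istio/grafana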

Related

How to make an app accessible over the internet using Ingress or MetalLB on bare metal

I'm running a k8s cluster with one control and one worker node on bare-metal Ubuntu machines (IPs: 123.223.149.27 and 22.36.211.68).
I deployed a sample app:
kubectl create deployment nginx --image=nginx
kubectl expose deploy nginx --port 80 --target-port 80 --type NodePort
Running kubectl get services shows me:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d23h
nginx NodePort 10.100.107.184 <none> 80:30799/TCP 5h48m
and I can access this application from inside the cluster with
kubectl run alpine --image=alpine --restart=Never --rm -it -- wget -O- 10.100.107.184:80
But now I want to access the sample app from outside the cluster over the internet via http://123.223.149.27, or later via the domain mywebsite.com, as the domain's DNS points to 123.223.149.27.
I applied:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
with this config map:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: production-public-ips
      protocol: layer2
      addresses:
      - 123.223.149.27/32
      - 22.36.211.68/32
and this ingress-nginx controller manifest:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.1/deploy/static/provider/cloud/deploy.yaml
It is not clear to me whether I have to use an Ingress (in which case I would use ingress-nginx), MetalLB, or both of them, and how to configure them. I have read a lot about service types like LoadBalancer and NodePort, but I don't think I understood the concept correctly. I only understand that if I'm using type LoadBalancer I have to provide a load balancer implementation, and as I am on bare metal that has to be MetalLB.
It would be very helpful for my understanding if someone could explain, using this example app, how to make it accessible over the internet.
Since you have a running service inside your Kubernetes cluster, you can expose it via an ingress controller, which is a reverse proxy that routes traffic from outside the cluster to your dedicated service(s) inside it.
We'll use ingress-nginx as an example; see https://github.com/kubernetes/ingress-nginx
These are the requirements you'll need for reaching your service at mywebsite.com:
Have access to the DNS records of your domain mywebsite.com
Install ingress-nginx in your cluster, see https://github.com/kubernetes/ingress-nginx/tree/main/charts/ingress-nginx
Install it using helm:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace $NAMESPACE \
--set controller.replicaCount=2 \
--set controller.nodeSelector."kubernetes\.io/os"=linux \
--set controller.admissionWebhooks.patch.nodeSelector."kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."kubernetes\.io/os"=linux \
--version $VERSIONS
You can look for versions compatible with your Kubernetes cluster version using:
helm search repo ingress-nginx/ingress-nginx --versions
When the installation has finished, you should see an ingress-nginx controller Service that holds an $EXTERNAL-IP (on bare metal, MetalLB is what assigns this address to the LoadBalancer Service, which is why you need both components):
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.0.XXX.XXX XX.XXX.XXX.XX 80:30578/TCP,443:31874/TCP 548d
Now that you have a running ingress controller, you need to create an Ingress object that manages external access to your $Service;
See https://kubernetes.io/docs/concepts/services-networking/ingress/
For example:
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    #cert-manager.io/cluster-issuer: letsencrypt-prod
    #nginx.ingress.kubernetes.io/ssl-redirect: 'true'
spec:
  tls:
  - hosts:
    - mywebsite.com
    #secretName: cert-wildcard # get it from certificate.yaml
  rules:
  - host: mywebsite.com
    http:
      paths:
      - path: / #/?(.*) #/(.*)
        pathType: Prefix
        backend:
          service:
            name: $Service
            port:
              number: 80
  ingressClassName: nginx
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
As you saw in the ingress file, the commented lines refer to the use of an SSL certificate generated by cert-manager from Let's Encrypt. This can be achieved by a separate process, described at https://cert-manager.io/docs/configuration/acme/;
it depends mainly on your DNS/cloud provider (Cloudflare, Azure, ...).
Finally, in your DNS zone, add a DNS record which maps mywebsite.com to $EXTERNAL-IP. Wait a few minutes and you should be able to reach your service at mywebsite.com.
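Before the record propagates, you can test the routing by sending the Host header straight to the controller (substitute the real external IP for the placeholder):
curl -H "Host: mywebsite.com" http://$EXTERNAL-IP/
Once DNS is live, a plain curl http://mywebsite.com/ should return the same response.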

No matches for kind "Gateway" and "VirtualService"

I am using Docker Desktop version 3.6.0 which has Kubernetes 1.21.3.
I am following this tutorial to get started on Istio:
https://istio.io/latest/docs/setup/getting-started/
Istio is properly installed as per the instructions.
Now whenever I try to apply the Istio configuration
by issuing the command kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml,
I get the following error:
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "VirtualService" in version "networking.istio.io/v1alpha3"
I checked on the internet and found that the Gateway and VirtualService resources are missing.
If I run kubectl get crd, I get no resources found.
Content of bookinfo-gateway.yaml:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
The CRDs for Istio should be installed as part of the istioctl install process. I'd recommend re-running the install if you don't have them available.
>>> ~/.istioctl/bin/istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Egress gateways installed
✔ Ingress gateways installed
✔ Installation complete
kubectl get po -n istio-system should look like this:
>>> kubectl get po -n istio-system
NAME READY STATUS RESTARTS AGE
istio-egressgateway-7ddb45fcdf-ctnp5 1/1 Running 0 3m20s
istio-ingressgateway-f7cdcd7dc-zdqhg 1/1 Running 0 3m20s
istiod-788ff675dd-9p75l 1/1 Running 0 3m32s
Otherwise your initial install has gone wrong somewhere.
Alternatively, you can apply the CRDs to your cluster without running istioctl install, using the manifest at https://github.com/istio/istio/blob/master/manifests/charts/base/crds/crd-all.gen.yaml
with
kubectl apply -f ./crd-all.gen.yaml
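Either way, you can confirm the CRDs are registered before re-applying the bookinfo manifest:
kubectl get crd | grep istio.io
kubectl api-resources --api-group=networking.istio.io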

Make RabbitMQ cluster publicly accessible

I am using this Helm chart to configure RabbitMQ on a k8s cluster:
https://github.com/helm/charts/tree/master/stable/rabbitmq
How can I make the cluster accessible through a public endpoint? Currently, I have a cluster with the configuration below. I am able to access the management portal by the given hostname (a public endpoint, which is fine). But when I checked inside the management portal, the cluster is only reachable by internal IP and/or hostname, i.e. rabbit@rabbitmq-0.rabbitmq-headless.default.svc.cluster.local and rabbit@<private_ip>. I want to make the cluster public so that all other services outside the VNET can connect to it.
helm install stable/rabbitmq --name rabbitmq \
--set rabbitmq.username=xxx \
--set rabbitmq.password=xxx \
--set rabbitmq.erlangCookie=secretcookie \
--set rbacEnabled=true \
--set ingress.enabled=true \
--set ingress.hostName=rabbitmq.xxx.com \
--set ingress.annotations."kubernetes\.io/ingress\.class"="nginx" \
--set resources.limits.memory="256Mi" \
--set resources.limits.cpu="100m"
I have not tried this with Helm; I built and deployed to Kubernetes directly from .yaml configuration files, so I only followed the templates of the Helm chart.
To publish your RabbitMQ service outside the cluster:
1) You need to have an external IP.
If you're using Google Cloud, run these commands:
gcloud compute addresses create rabbitmq-service-ip --region asia-southeast1
gcloud compute addresses describe rabbitmq-service-ip --region asia-southeast1
>address: 35.240.xxx.xxx
Change rabbitmq-service-ip to the name you want, and change the region to your own.
2) Configure the Helm parameters (a sketch of passing these to the chart follows below):
service.type=LoadBalancer
service.loadBalancerSourceRanges=35.240.xxx.xxx/32 # IP address you got from gcloud
service.port=5672
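For example, these might be passed to the chart from the question like this (flag names follow the old stable/rabbitmq chart and Helm 2 syntax; many charts also accept service.loadBalancerIP for pinning the reserved address):
helm install stable/rabbitmq --name rabbitmq \
--set service.type=LoadBalancer \
--set service.loadBalancerSourceRanges="{35.240.xxx.xxx/32}" \
--set service.port=5672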
3) Deploy, then try to telnet to your RabbitMQ service:
telnet 35.240.xxx.xxx 5672
Trying 35.240.xxx.xxx...
Connected to 149.185.xxx.xxx.bc.googleusercontent.com.
Escape character is '^]'.
Gotcha! It works.
FYI:
Here is a base template if you want to write the .yaml files and deploy without Helm.
service.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  type: LoadBalancer
  loadBalancerIP: 35.xxx.xxx.xx
  ports:
  # the port that this service should serve on
  - port: 5672
    name: rabbitmq
    targetPort: 5672
    nodePort: 32672
  selector:
    name: rabbitmq
deployment.yaml
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: rabbitmq
  labels:
    name: rabbitmq
  namespace: smart-office
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: rabbitmq
      annotations:
        prometheus.io/scrape: "false"
    spec:
      containers:
      - name: rabbitmq
        image: rabbitmq:3.6.8-management
        ports:
        - containerPort: 5672
          name: rabbitmq
        securityContext:
          capabilities:
            drop:
            - all
            add:
            - CHOWN
            - SETGID
            - SETUID
            - DAC_OVERRIDE
          readOnlyRootFilesystem: true
      - name: rabbitmq-exporter
        image: kbudde/rabbitmq-exporter
        ports:
        - containerPort: 9090
          name: exporter
      nodeSelector:
        beta.kubernetes.io/os: linux
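Both manifests can then be applied together (this assumes the smart-office namespace from the templates; create it first if it doesn't exist):
kubectl create namespace smart-office
kubectl apply -f service.yaml -f deployment.yaml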
Hope this helps!
From the Helm values you passed, I see that you have configured your RabbitMQ service with an Nginx ingress.
You should create a DNS record with your ingress.hostName (rabbitmq.xxx.com) pointed at the ingress IP (if GCP) or CNAME (if AWS) of your nginx-ingress load balancer. That DNS hostname (rabbitmq.xxx.com) is your public endpoint for accessing your RabbitMQ service.
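To find the address or CNAME to point the record at, look at the external IP of the controller's Service (the exact Service name and namespace depend on how the controller was installed):
kubectl get svc --all-namespaces | grep ingress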
Ensure that your nginx-ingress controller is running in your cluster in order for the ingresses to work. If you are unfamiliar with ingresses:
- Official Ingress Docs
- Nginx Ingress installation guide
- Nginx Ingress helm chart
Hope this helps!

Find why I am getting a 502 Bad Gateway error on Kubernetes

I am using Kubernetes. I have an Ingress service which talks to my container service. We have exposed a web API which works fine, but we keep getting 502 Bad Gateway errors. I am new to Kubernetes and have no clue how to go about debugging this issue. The server is a Node.js server connected to a database. Is there anything wrong with the configuration?
My deployment file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-pod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: my-pod
    spec:
      containers:
      - name: my-pod
        image: my-image
        ports:
        - name: "http"
          containerPort: 8086
        resources:
          limits:
            memory: 2048Mi
            cpu: 1020m
---
apiVersion: v1
kind: Service
metadata:
  name: my-pod-serv
spec:
  ports:
  - port: 80
    targetPort: "http"
  selector:
    app: my-pod
My Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: abc.test.com
    http:
      paths:
      - path: /abc
        backend:
          serviceName: my-pod-serv
          servicePort: 80
In your case:
I think you are getting this 502 Bad Gateway error because you don't have the Ingress controller configured correctly.
Please try doing it with an installed Ingress controller as in the example below. It will do everything automatically.
Nginx Ingress step by step:
1) Install helm
2) Install nginx controller using helm
$ helm install stable/nginx-ingress --name nginx-ingress
It will create 2 services. You can get their details via
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.39.240.1 <none> 443/TCP 29d
nginx-ingress-controller LoadBalancer 10.39.243.140 35.X.X.15 80:32324/TCP,443:31425/TCP 19m
nginx-ingress-default-backend ClusterIP 10.39.252.175 <none> 80/TCP 19m
nginx-ingress-controller - in short, it handles requests arriving at the Ingress and directs them to your services
nginx-ingress-default-backend - in short, the default backend is a service which handles all URL paths and hosts that the nginx controller doesn't understand
3) Create 2 deployments (or use yours)
$ kubectl run my-pod --image=nginx
deployment.apps/my-pod created
$ kubectl run nginx1 --image=nginx
deployment.apps/nginx1 created
4) Connect to one of the pods
$ kubectl exec -ti my-pod-675799d7b-95gph bash
And add an additional line to the index page, so we can recognize this pod when we connect to it later:
$ echo "HELLO THIS IS INGRESS TEST" >> /usr/share/nginx/html/index.html
$ exit
5) Expose deployments.
$ kubectl expose deploy nginx1 --port 80
service/nginx1 exposed
$ kubectl expose deploy my-pod --port 80
service/my-pod exposed
This will automatically create a Service, which will look like this:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: my-pod
  name: my-pod
  selfLink: /api/v1/namespaces/default/services/my-pod
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-pod
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
6) Now it's time to create Ingress.yaml and deploy it. Each rule in the Ingress needs to be specified. Here I have 2 services; each service's specification starts with - host under the rules parameter.
Ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: two-svc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.pod.svc
    http:
      paths:
      - path: /pod
        backend:
          serviceName: my-pod
          servicePort: 80
  - host: nginx.test.svc
    http:
      paths:
      - path: /abc
        backend:
          serviceName: nginx1
          servicePort: 80
$ kubectl apply -f Ingress.yaml
ingress.extensions/two-svc-ingress created
7) You can check the Ingress and its hosts:
$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
two-svc-ingress my.pod.svc,nginx.test.svc 35.228.230.6 80 57m
8) Explanation of why I installed the Ingress controller.
Connect to the ingress controller pod
$ kubectl exec -ti nginx-ingress-controller-76bf4c745c-prp8h bash
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ cat /etc/nginx/nginx.conf
Because I installed the nginx ingress controller earlier, after deploying Ingress.yaml the nginx-ingress-controller detected the changes and automatically added the necessary configuration.
In this file you should be able to find the whole configuration for the two services. I will not copy the configuration here, only the headers:
start server my.pod.svc
start server nginx.test.svc
www-data@nginx-ingress-controller-76bf4c745c-prp8h:/etc/nginx$ exit
9) Test
$ kubectl get svc   # to get your nginx-ingress-controller external IP
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/
default backend - 404
$ curl -H "HOST: my.pod.svc" http://35.X.X.15/pod
<!DOCTYPE html>
...
</html>
HELLO THIS IS INGRESS TEST
Please keep in mind that the Ingress needs to be in the same namespace as its services. If you have services in many namespaces, you need to create an Ingress for each namespace.
I would need to set up a cluster in order to test your yml files.
Just to help you debug, follow these steps (a command sketch follows the list):
1- Get the logs of the my-pod container using kubectl logs my-pod-container-name and make sure everything is working.
2- Use port-forward to expose your container and test it.
3- Make sure the service is working properly; change its type to LoadBalancer, so you can reach it from outside the cluster.
If these three things are working, then the problem is in your Ingress configuration.
I am not sure if I explained it in a detailed way, let me know if something is not clear.
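A rough sketch of those three checks (resource names come from the manifests above; the pod name suffix is whatever kubectl get pods shows):
# 1) container logs
kubectl logs deploy/my-pod
# 2) port-forward straight to the deployment and hit the app
kubectl port-forward deploy/my-pod 8086:8086
curl http://localhost:8086/
# 3) temporarily expose the service outside the cluster
kubectl patch svc my-pod-serv -p '{"spec": {"type": "LoadBalancer"}}'
kubectl get svc my-pod-serv   # wait for an external IP, then curl it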

Istio Gateway and traffic routing do not work (deployed via Jenkins X/jx)

So we have an "environment-staging" repo which was created by Jenkins X. In it we commit the following YAMLs to the env/templates folder. The Kubernetes cluster is in AWS EKS.
apiVersion: v1
kind: Namespace
metadata:
  name: global-gateway
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: app-gateway
  namespace: global-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: app-hosts
  namespace: jx-staging
spec:
  hosts:
  - "*"
  gateways:
  - app-gateway.global-gateway.svc.cluster.local
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: test-app
        port:
          number: 80
The above YAMLs work perfectly, and I can access the service when they are applied via kubectl apply -f .
However, instead of creating them manually, we commit and push them to the repo, which triggers a JX job that runs successfully. Afterwards we can see that the Gateway and VirtualService have been deployed correctly, i.e. if we run kubectl get gateway we can see our gateway.
However, the URL does not work and does not route to the microservice after being applied from Jenkins.
The command that Jenkins seems to run is:
helm upgrade --namespace jx-staging --install --wait --force --timeout 600 --values values.yaml jx-staging .
To try and diagnose the problem, I deployed using both kubectl and Jenkins and diffed the output of kubectl describe gateway/virtualservice <name>.
The Jenkins/Helm deployment showed Annotations: <none>, while when deployed with kubectl it showed
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"networking.istio.
The resource version numbers were also different, but I assume that is correct and alright?
EDIT:
The Helm chart (Chart.yaml) is as follows:
description: GitOps Environment for this Environment
icon: https://www.cloudbees.com/sites/default/files/Jenkins_8.png
maintainers:
- name: Team
name: env
version: "39"
Please advise on how to get the Istio gateway running with jx/Helm.