Prometheus Operator: Can't access Prometheus instance - kubernetes

I am following the documentation to create services for the Prometheus Operator. I am not sure why I cannot access the Prometheus service.
My apps.yml:
kind: Service
apiVersion: v1
metadata:
  name: sms-config-service
  labels:
    app: sms-config-service
spec:
  type: NodePort
  selector:
    app: sms-config-service
  ports:
    - port: 8080
      targetPort: 8080
      name: http
My ServiceMonitor yml:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: servicemonitor-sms-services
  name: servicemonitor-sms-config-services
  namespace: metrics
spec:
  selector:
    matchLabels:
      app: sms-config-service
  endpoints:
    - port: http
Prometheus yml:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      app: servicemonitor-sms-services
  resources:
    requests:
      memory: 800Mi
  enableAdminAPI: true
Prometheus config yml:
apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: NodePort
  ports:
    - name: web
      nodePort: 30900
      port: 9090
      protocol: TCP
      targetPort: web
  selector:
    prometheus: prometheus
When I access the URL below, the browser shows "unable to connect". I am not sure what I did wrong. Should I set up a Deployment for Prometheus?
$ minikube service prometheus --url
http://192.168.64.3:30900
Update:
I have the Prometheus pod exposed on NodePort 32676.
Should I change the Prometheus config yml to fix the issue?

I found that the issue is I didn't have the ServiceAccount referenced by serviceAccountName created.
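For reference, a minimal sketch of the ServiceAccount and RBAC objects along the lines of what the Prometheus Operator getting-started guide expects; the names are assumed to match the serviceAccountName: prometheus above, and the namespace in the binding is an assumption to adjust:
# ServiceAccount referenced by the Prometheus resource
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
# Cluster-wide read access so Prometheus can discover its scrape targets
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: ["nodes", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
  - kind: ServiceAccount
    name: prometheus
    namespace: default   # assumption: change to the namespace where Prometheus runs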

Related

tunnel for service target port empty kubernetes and can't access pod from local browser

apiVersion: apps/v1
kind: Deployment
metadata:
  name: identityold-deployment
spec:
  selector:
    matchLabels:
      app: identityold
  replicas: 1
  template:
    metadata:
      labels:
        app: identityold
    spec:
      containers:
        - name: identityold
          image: <image name from docker hub>
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: identityold
  name: identityold-svc
  namespace: default
spec:
  type: NodePort # use LoadBalancer as type here
  ports:
    - port: 80
      targetPort: 8081
      nodePort: 30036
  selector:
    app: identityold
The above is my deployment YAML file, and I can't access the service from the browser.
Exposing a service in a minikube cluster is a little different than in a normal Kubernetes cluster.
Please follow this guide from the Kubernetes documentation and use the minikube service command to expose it properly, for example as shown below.
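A minimal sketch, using the service name from the manifest above:
# Opens a tunnel to the NodePort service and prints a URL reachable from the host
$ minikube service identityold-svc --url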

Service does not connect with Rollout

When I try to do a Blue/Green deployment using Argo Rollouts, I can't figure out why the Service doesn't seem to connect to the Pods created by the Rollout.
We have Services and an Ingress (ALB Ingress Controller) as active and preview respectively, and we set each Service's selector to the labels specified in the Rollout.
https://argoproj.github.io/argo-rollouts/features/bluegreen/
If you need anything else, please let me know.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: sample-app-a-deployment
spec:
  selector:
    matchLabels:
      app: sample-app-a
  template:
    metadata:
      labels:
        app: sample-app-a
---
apiVersion: v1
kind: Service
metadata:
  name: app-a-preview-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: sample-app-a
---
apiVersion: v1
kind: Service
metadata:
  name: app-a-service
  namespace: default
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: sample-app-a
I also use ArgoCD and this is what it looks like from the GUI.
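For context, the blue/green guide linked above expects the Rollout itself to name the active and preview Services under strategy.blueGreen. A minimal sketch, assuming the Services defined above (the container image is a placeholder):
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: sample-app-a-deployment
spec:
  strategy:
    blueGreen:
      activeService: app-a-service            # Rollouts rewrites this Service's selector
      previewService: app-a-preview-service   # and this one, to target the preview ReplicaSet
  selector:
    matchLabels:
      app: sample-app-a
  template:
    metadata:
      labels:
        app: sample-app-a
    spec:
      containers:
        - name: sample-app-a
          image: IMAGENAME   # placeholder
          ports:
            - containerPort: 8080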

Kubernetes nginx ingress controller throws error when trying to obtain endpoints for service

I am trying to set up microservices on Kubernetes on Google Cloud Platform. I've created deployment, ClusterIP, and ingress configuration files.
First, after creating the cluster, I run this command to install the nginx ingress controller:
helm install my-nginx stable/nginx-ingress --set rbac.create=true
I use helm v3.
Then I apply the deployment and ClusterIP configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-production-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: app-production
  template:
    metadata:
      labels:
        component: app-production
    spec:
      containers:
        - name: app-production
          image: eu.gcr.io/my-project/app:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluser-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
My ingress config is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app-production-cluster-ip-service
              servicePort: 80
I get this error from the Google Cloud Platform logs, within the ingress controller:
Error obtaining Endpoints for Service "default/app-production-cluster-ip-service": no object matching key "default/app-production-cluster-ip-service" in local store
But when I run the kubectl get endpoints command, the output is this:
NAME                               ENDPOINTS                     AGE
app-production-cluser-ip-service   10.60.0.12:80,10.60.1.13:80   17m
I am really not sure what I'm doing wrong.
The service name mentioned in the ingress does not match the actual Service name (cluser vs. cluster). Please recreate the service with the matching name and check:
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
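A quick way to verify is to check that endpoints now appear under the corrected name, e.g.:
$ kubectl get endpoints app-production-cluster-ip-service
The nginx ingress controller looks up the backend by the exact serviceName, so the one-letter typo (cluser vs. cluster) is enough to cause the "no object matching key" error even though the endpoints exist.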

Kubernetes ingress-nginx gives 502 error (Bad Gateway)

I have an EKS cluster for which I want:
- 1 Load Balancer per cluster,
- Ingress rules to direct to the right namespace and the right service.
I have been following this guide : https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
My deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: IMAGENAME
          ports:
            - containerPort: 8000
              name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
        - name: bleble
          image: IMAGENAME
          ports:
            - containerPort: 8000
              name: bleble
The Services for those Deployments:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8000
  selector:
    app: bleble
  type: NodePort
My Load balancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: internal-lb.aws.com
      http:
        paths:
          - path: /bleble
            backend:
              serviceName: bleble-svc
              servicePort: 80
          - path: /hello-world
            backend:
              serviceName: hello-world-svc
              servicePort: 80
I've set up the Nginx Ingress Controller with this : kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other... I would guess it's a problem with ports or namespaces? In the guide, they don't define a namespace for the deployment...
All the resources are created correctly, and I think the ingress is actually working but is getting confused about where to route.
Thanks for your help!
In general, use externalTrafficPolicy: Cluster instead of Local. You can gain some latency improvement with Local, but you need to configure pod allocation carefully, and misconfigurations there lead to 5xx errors. In addition, Cluster is the default option for externalTrafficPolicy.
In your ingress, make sure the serviceName matches the Service's metadata.name exactly (bleble-svc, not bleble). Also, you need to set servicePort to 8080, since that is the port you exposed in your Service configuration: the Ingress servicePort refers to the Service's port, not the Pod's targetPort.
For an internal service like bleble-svc, ClusterIP is good enough in your case, as it does not need external access.
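As a sketch, the corrected Ingress could look like this; the only assumed changes are the servicePort values, matched to the Services above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: internal-lb.aws.com
      http:
        paths:
          - path: /bleble
            backend:
              serviceName: bleble-svc
              servicePort: 8080   # the Service's port, not its targetPort
          - path: /hello-world
            backend:
              serviceName: hello-world-svc
              servicePort: 8080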
Hope this helps.
Found it!
The containerPort in the Deployments was set to 8000, and so was the targetPort of the Services, but the person who wrote the Dockerfile for the code exposed port 80, which was the reason for the 502 Bad Gateway!
Thanks a lot as well to @Fei, who has been a fantastic helper!

How to expose nginx on public Ip using NodePort service in Kubernetes?

I'm executing kubectl create -f nginx.yaml, which creates the pods successfully. But the Pods aren't exposed on the public IP of my instance. Following is the YAML I used, with service type NodePort:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
    - port: 443
      nodePort: 30443
      name: https
  selector:
    name: nginx
What could be incorrect in my approach or in the above YAML file to expose the pods of the deployment on the public IP?
PS: Firewalls and ACLs are open to the internet on all TCP ports.
The endpoints were not getting added. On debugging, I found a mismatch between the Deployment's Pod labels and the Service's selector (app: nginx vs. name: nginx). Hence I changed the label key from "app" to "name" and it worked.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    name: nginx
Jeel is right. Your Service selector mismatches your Pod labels.
If you fix that, as Jeel already did in this answer:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30080
      name: http
  selector:
    name: nginx
Your Service will be exposed on the node's IP address, because your Service type is NodePort.
If your node IP is, let's say, 35.226.16.207, you can connect to your Pod using this IP and the NodePort:
$ curl 35.226.16.207:30080
In this case, your node must have a public IP. Otherwise, you can't access it from outside the cluster.
As a second option, you can create a LoadBalancer-type Service:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  type: LoadBalancer
  ports:
    - port: 80
      name: http
  selector:
    name: nginx
This will provide you with a public IP.
For more details, check this.