Expose a specific endpoint with multiple ingresses in one application - kubernetes-helm

I have an app that already uses an internal ingress, but I want to expose a single endpoint to the internet. How can I do that?
My chart:
apis:
- name: api
  image:
    repositoryURI: URL
  containerPort: 80
  workload: general
  service:
    enabled: true
    port: 80
  ingress:
  - enabled: true
    type: internal-ms
    hosts:
    - hostname: example.qa
    - hostname: qa.example.local
    annotations:
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "300"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "300"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "300"
      nginx.ingress.kubernetes.io/proxy-body-size: 50m
      nginx.org/client-max-body-size: 50m
  hpa:
    ...
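One approach that avoids touching the chart's internal ingress at all is to add a second, internet-facing Ingress that routes only the single path you want to expose to the existing Service. A minimal sketch as a raw manifest; the ingress class, public host, and path are placeholders, and it assumes the Service created by the chart is named api and listens on port 80:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-public
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
spec:
  ingressClassName: external-nginx   # class of your internet-facing controller (assumption)
  rules:
  - host: api.example.com            # public hostname (placeholder)
    http:
      paths:
      - path: /public-endpoint       # the single endpoint to expose (placeholder)
        pathType: Prefix
        backend:
          service:
            name: api                # Service created by the chart (assumption)
            port:
              number: 80

If the chart's ingress list supports multiple entries, the same idea could be expressed as a second item under ingress: with an external type, but that depends on the chart's own schema.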

Related

How to access the Alertmanager, Grafana and Prometheus URLs or DNS records with ingress in the kube-prometheus-stack Helm chart values.yaml file?

In the kube-prometheus-stack Helm chart values.yaml file, I made the following changes to expose Alertmanager, Grafana and Prometheus on their DNS records through ingress. Since we are already using the haproxy ingress controller for other services on EKS, I used haproxy ingress here as well.
For Alertmanager:
alertmanager:
  enabled: true
  serviceAccount:
    create: true
  ingress:
    enabled: true
    annotations:
      ingressClassName: haproxy
      cert-manager.io/cluster-issuer: "letsencrypt"
      haproxy.ingress.kubernetes.io/use-regex: "true"
      haproxy.ingress.kubernetes.io/rewrite-target: /$1
    labels:
      app.kubernetes.io/instance: cert-manager
    hosts:
    - alertmanager.monitoring.new.io
    tls:
    - secretName: alertmanager-tls
      hosts:
      - alertmanager.monitoring.new.io
  service:
    port: 443
    targetPort: 9093
For Grafana:
grafana:
  enabled: true
  service:
    port: 443
    targetPort: 3000
  ingress:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt"
      kubernetes.io/ingress.class: haproxy
      haproxy.ingress.kubernetes.io/use-regex: "true"
      haproxy.ingress.kubernetes.io/rewrite-target: /$1
    labels:
      app.kubernetes.io/instance: cert-manager
    hosts:
    - grafana.monitoring.new.io
    tls:
    - secretName: test-grafana-tls
      hosts:
      - grafana.monitoring.new.io
For Prometheus:
prometheus:
  enabled: true
  serviceAccount:
    create: true
  ingress:
    enabled: true
    annotations:
      ingressClassName: haproxy
      cert-manager.io/cluster-issuer: "letsencrypt"
      haproxy.ingress.kubernetes.io/use-regex: "true"
      haproxy.ingress.kubernetes.io/rewrite-target: /$1
    labels:
      app.kubernetes.io/instance: cert-manager
    hosts:
    - prometheus.monitoring.new.io
    tls:
    - secretName: prometheus-tls
      hosts:
      - prometheus.monitoring.new.io
  service:
    port: 443
    targetPort: 9090
I also defined the DNS records on AWS Route 53 and made the necessary settings, but I can only access the Grafana web interface from the Grafana URL. When I try to access the Alertmanager and Prometheus web interfaces, I get a "default backend - 404" error.
Despite a lot of research, I could not reach a clear conclusion. Do I need to try a different method, or where am I going wrong?
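One thing that stands out in the values above: in the kube-prometheus-stack chart, ingressClassName is a field of the ingress block itself rather than an annotation, so placing it under annotations (as in the Alertmanager and Prometheus sections) most likely leaves those Ingress objects without an ingress class, which would explain the "default backend - 404". The Grafana section, which works, sets the class via the kubernetes.io/ingress.class annotation instead. A minimal sketch of the Alertmanager ingress values with the class moved up one level, assuming the chart version supports ingressClassName; otherwise the kubernetes.io/ingress.class annotation used for Grafana should have the same effect:

alertmanager:
  ingress:
    enabled: true
    ingressClassName: haproxy        # sibling of annotations, not inside it
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt"
      haproxy.ingress.kubernetes.io/use-regex: "true"
      haproxy.ingress.kubernetes.io/rewrite-target: /$1
    hosts:
    - alertmanager.monitoring.new.io
    tls:
    - secretName: alertmanager-tls
      hosts:
      - alertmanager.monitoring.new.io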

Prometheus Operator not scraping colocated etcd metrics

I have a K8s cluster with colocated etcd deployed on on-prem servers using Kubespray, and I don't see the etcd metrics getting scraped by the Prometheus Operator. The Prometheus Operator was deployed using Helm v3.5.4.
K8s version 1.22, Helm chart prometheus-community/kube-prometheus-stack version 25.0.0, 3-node control plane on CentOS 7.
The Prometheus config shows a job for etcd: job_name: serviceMonitor/monitoring/kube-prometheus-kube-prome-kube-etcd/0.
But there is no service for etcd in the list of Services in Prometheus, and there are no endpoints defined for etcd.
values.yaml (updated with volumes) for the Helm deployment:
prometheus:
  service:
    type: NodePort
    externalTrafficPolicy: Local
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "custom"
    hosts:
    - prometheus.{{ cluster_domain }}.mydomain.com
    paths:
    - /
    pathType: Prefix
    tls:
    - secretName:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: rook-ceph-block
          resources:
            requests:
              storage: {{ monitoring.storage_size }}
    volumeMounts:
    - name: cert-vol
      mountPath: "/etc/prometheus/secrets/etcd-certs"
      readOnly: true
    volumes:
    - name: cert-vol
      secret:
        secretName: etcd-certs
kubeEtcd:
  enabled: true
  endpoints:
  - 172.1.1.1
  - 172.1.1.2
  - 172.1.1.3
  service:
    port: 2379
    targetPort: 2379
  serviceMonitor:
    scheme: https
    insecureSkipVerify: true
    caFile: /etc/prometheus/secrets/etcd-certs/ca.crt
    certFile: /etc/prometheus/secrets/etcd-certs/client.crt
    keyFile: /etc/prometheus/secrets/etcd-certs/client.key
I added the endpoints to the kubeEtcd section to get it to work. The updated values.yaml is shown below (with changed IP addresses):
prometheus:
  service:
    type: NodePort
    externalTrafficPolicy: Local
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: "custom"
    hosts:
    - prometheus.{{ cluster_domain }}.mydomain.com
    paths:
    - /
    pathType: Prefix
    tls:
    - secretName:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: rook-ceph-block
          resources:
            requests:
              storage: {{ monitoring.storage_size }}
    volumeMounts:
    - name: cert-vol
      mountPath: "/etc/prometheus/secrets/etcd-certs"
      readOnly: true
    volumes:
    - name: cert-vol
      secret:
        secretName: etcd-certs
kubeEtcd:
  enabled: true
  endpoints:
  - 172.1.1.1
  - 172.1.1.2
  - 172.1.1.3
  service:
    port: 2379
    targetPort: 2379
  serviceMonitor:
    scheme: https
    insecureSkipVerify: true
    caFile: /etc/prometheus/secrets/etcd-certs/ca.crt
    certFile: /etc/prometheus/secrets/etcd-certs/client.crt
    keyFile: /etc/prometheus/secrets/etcd-certs/client.key
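To confirm that the kubeEtcd override took effect, it can help to check that the chart created the etcd Service and Endpoints in kube-system and that the etcd metrics endpoint is reachable with the client certificates. A rough check, assuming the resource names implied by the job name above and the etcd certs available locally:

kubectl get service,endpoints -n kube-system | grep etcd
kubectl get servicemonitor -n monitoring | grep etcd
# from a control-plane node, using the etcd client certs (paths are placeholders):
curl --cacert ca.crt --cert client.crt --key client.key https://172.1.1.1:2379/metrics | head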

Zonal network endpoint group unhealthy even though the container application is working properly

I've created a Kubernetes cluster on Google Cloud, and even though the application is running properly (which I've verified by running requests inside the cluster), it seems that the NEG health check is not working. Any ideas on the cause?
I've tried changing the service from NodePort to LoadBalancer and different ways of adding annotations to the service. I was also thinking that it might be related to the HTTPS requirement on the Django side.
# [START kubernetes_deployment]
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moner-app
  labels:
    app: moner-app
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: moner-app
  template:
    metadata:
      labels:
        app: moner-app
    spec:
      containers:
      - name: moner-core-container
        image: my-template
        imagePullPolicy: Always
        resources:
          requests:
            memory: "128Mi"
          limits:
            memory: "512Mi"
        startupProbe:
          httpGet:
            path: /ht/
            port: 5000
            httpHeaders:
            - name: "X-Forwarded-Proto"
              value: "https"
          failureThreshold: 30
          timeoutSeconds: 10
          periodSeconds: 10
          initialDelaySeconds: 90
        readinessProbe:
          initialDelaySeconds: 120
          httpGet:
            path: "/ht/"
            port: 5000
            httpHeaders:
            - name: "X-Forwarded-Proto"
              value: "https"
          periodSeconds: 10
          failureThreshold: 3
          timeoutSeconds: 10
        livenessProbe:
          initialDelaySeconds: 30
          failureThreshold: 3
          periodSeconds: 30
          timeoutSeconds: 10
          httpGet:
            path: "/ht/"
            port: 5000
            httpHeaders:
            - name: "X-Forwarded-Proto"
              value: "https"
        volumeMounts:
        - name: cloudstorage-credentials
          mountPath: /secrets/cloudstorage
          readOnly: true
        env:
        # [START_secrets]
        - name: THIS_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: GRACEFUL_TIMEOUT
          value: '120'
        - name: GUNICORN_HARD_TIMEOUT
          value: '90'
        - name: DJANGO_ALLOWED_HOSTS
          value: '*,$(THIS_POD_IP),0.0.0.0'
        ports:
        - containerPort: 5000
        args: ["/start"]
      # [START proxy_container]
      - image: gcr.io/cloudsql-docker/gce-proxy:1.16
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=moner-dev:us-east1:core-db=tcp:5432",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        resources:
          requests:
            memory: "64Mi"
          limits:
            memory: "128Mi"
        volumeMounts:
        - name: cloudsql-oauth-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-oauth-credentials
        secret:
          secretName: cloudsql-oauth-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir: {}
      - name: cloudstorage-credentials
        secret:
          secretName: cloudstorage-credentials
      # [END volumes]
# [END kubernetes_deployment]
---
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
  - name: moner-core-http
    port: 5000
    protocol: TCP
    targetPort: 5000
  selector:
    app: moner-app
# [END service]
---
# [START certificates_setup]
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
  - domain.com
  - app.domain.com
# [END certificates_setup]
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
    - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: moner-ssl
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: moner-svc
      port:
        name: moner-core-http
Apparently, you didn't have a GCP firewall rule allowing traffic on port 5000 to your GKE nodes. Creating an ingress firewall rule with IP range 0.0.0.0/0 and port TCP 5000, targeted at your GKE nodes, could allow your setup to work even with port 5000.
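For reference, such a rule can be created with gcloud; the rule name, network, and node tag below are placeholders and need to match your cluster, and the source range could also be narrowed to Google's documented health-check ranges (130.211.0.0/22 and 35.191.0.0/16) instead of 0.0.0.0/0:

gcloud compute firewall-rules create allow-gke-neg-5000 \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:5000 \
  --source-ranges=0.0.0.0/0 \
  --target-tags=<gke-node-tag>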
I'm still not sure why, but I've managed to get it working by moving the service to port 80 and keeping the health check on port 5000.
Service config:
# [START service]
apiVersion: v1
kind: Service
metadata:
  name: moner-svc
  annotations:
    cloud.google.com/neg: '{"ingress": true, "exposed_ports": {"5000":{}}}' # Creates an NEG after an Ingress is created
    cloud.google.com/backend-config: '{"default": "moner-backendconfig"}'
  labels:
    app: moner-svc
spec:
  type: NodePort
  ports:
  - name: moner-core-http
    port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    app: moner-app
# [END service]
Backend config:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: moner-backendconfig
spec:
  customRequestHeaders:
    headers:
    - "X-Forwarded-Proto:https"
  healthCheck:
    checkIntervalSec: 15
    port: 5000
    type: HTTP
    requestPath: /ht/
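If the NEG still reports unhealthy backends after a change like this, it may be worth checking the health status the load balancer actually sees. A couple of hedged gcloud checks; the backend service name is a placeholder, since the real one is auto-generated by the GKE ingress controller:

gcloud compute network-endpoint-groups list
gcloud compute backend-services list
gcloud compute backend-services get-health <backend-service-name> --global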

Istio 1.9 manifest install fails with a Gateway overlay error

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio
spec:
  profile: default
  values:
    gateways:
      istio-ingressgateway:
        #sds:
        enabled: true
  components:
    base:
      enabled: true
    ingressGateways:
    - name: istio-ingressgateway
      enabled: true
      namespace: test-ingress
      k8s:
        overlays:
        - apiVersion: v1
          kind: Gateway
          name: micro-ingress
          patches:
          - path: spec.servers
            value:
            - port:
                number: 80
                name: http
                protocol: HTTP
              hosts:
              - "*" # host should be specified in the VirtualService
              tls:
                httpsRedirect: true
            - port:
                number: 443
                name: https
                protocol: HTTPS
              hosts:
              - "*"
              tls:
                mode: SIMPLE
                credentialName: secret
                privateKey: sds
                serverCertificate: sds
        serviceAnnotations:
          service.beta.kubernetes.io/azure-load-balancer-internal: "true"
        service:
          ports:
          - name: status-port
            port: 15020
            targetPort: 15020
          - name: http2
            port: 80
            targetPort: 8080
          - name: https
            port: 443
            targetPort: 8443
          loadBalancerIP: xx.xx.xx.xxx
        resources:
          limits:
            cpu: 2000m
            memory: 1024Mi
          requests:
            cpu: 100m
            memory: 128Mi
istioctl manifest apply -f .\manifest.yml
This will install the Istio 1.9.0 default profile with ["Istio core" "Istiod" "Ingress gateways"] components into the cluster.
Proceed? (y/N) y
Error: failed to install manifests: errors occurred during operation: overlay for Gateway:micro-ingress does not match any object in output
manifest. Available objects are:
HorizontalPodAutoscaler:test-ingress:istio-ingressgateway
Deployment:test-ingress:istio-ingressgateway
PodDisruptionBudget:test-ingress:istio-ingressgateway
Role:test-ingress:istio-ingressgateway-sds
RoleBinding:test-ingress:istio-ingressgateway-sds
Service:test-ingress:istio-ingressgateway
ServiceAccount:test-ingress:istio-ingressgateway-service-account
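The overlays in an IstioOperator spec can only patch objects that the operator itself renders for that component, i.e. the HorizontalPodAutoscaler, Deployment, Service, and so on listed in the error. A Gateway is an Istio custom resource that the default profile does not generate, so there is no micro-ingress object in the output manifest to patch. A common alternative is to drop the overlay and apply the Gateway as its own manifest once the ingress gateway is installed. A minimal sketch reusing the servers from the overlay above; the istio: ingressgateway selector assumes the default labels on the gateway pods, and credentialName is assumed to point at a TLS secret available to the gateway:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: micro-ingress
  namespace: test-ingress
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
    tls:
      httpsRedirect: true
  - port:
      number: 443
      name: https
      protocol: HTTPS
    hosts:
    - "*"
    tls:
      mode: SIMPLE
      credentialName: secret

This can then be applied with kubectl apply -f gateway.yaml after istioctl finishes installing the ingress gateway.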

Get http://<master-ip>:<nodeport>/metrics: context deadline exceeded

I built a Kubernetes cluster with 2 Azure Ubuntu VMs and am trying to monitor it. For that, I have deployed a node-exporter DaemonSet, Heapster, Prometheus, and Grafana, and configured node-exporter as a target in the Prometheus config, but I am getting a Get http://<master-IP>:30002/metrics: context deadline exceeded error. I have also increased the scrape_interval and scrape_timeout values in the Prometheus config file.
The following are the manifests for the node-exporter DaemonSet and Service, and the Prometheus ConfigMap.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-exporter
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      containers:
      - args:
        - --web.listen-address=<master-IP>:30002
        - --path.procfs=/host/proc
        - --path.sysfs=/host/sys
        - --path.rootfs=/host/root
        - --collector.filesystem.ignored-mount-points=^/(dev|proc|sys|var/lib/docker/.+)($|/)
        - --collector.filesystem.ignored-fs-types=^(autofs|binfmt_misc|cgroup|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|mqueue|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|sysfs|tracefs)$
        image: quay.io/prometheus/node-exporter:v0.18.1
        name: node-exporter
        resources:
          limits:
            cpu: 250m
            memory: 180Mi
          requests:
            cpu: 102m
            memory: 180Mi
        volumeMounts:
        - mountPath: /host/proc
          name: proc
          readOnly: false
        - mountPath: /host/sys
          name: sys
          readOnly: false
        - mountPath: /host/root
          mountPropagation: HostToContainer
          name: root
          readOnly: true
      - args:
        - --logtostderr
        - --secure-listen-address=[$(IP)]:9100
        - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
        - --upstream=http://<master-IP>:30002/
        env:
        - name: IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        image: quay.io/coreos/kube-rbac-proxy:v0.4.1
        name: kube-rbac-proxy
        ports:
        - containerPort: 9100
          hostPort: 9100
          name: https
        resources:
          limits:
            cpu: 20m
            memory: 40Mi
          requests:
            cpu: 10m
            memory: 20Mi
      hostNetwork: true
      hostPID: true
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: node-exporter
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /proc
        name: proc
      - hostPath:
          path: /sys
        name: sys
      - hostPath:
          path: /
        name: root
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - name: https
    port: 9100
    targetPort: https
    nodePort: 30002
  selector:
    app: node-exporter
---prometheus-config-map.yaml-----
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-server-conf
  labels:
    name: prometheus-server-conf
  namespace: default
data:
  prometheus.yml: |-
    global:
      scrape_interval: 5m
      evaluation_interval: 3m
    scrape_configs:
      - job_name: 'node'
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        static_configs:
          - targets: ['<master-IP>:30002']
      - job_name: 'kubernetes-apiservers'
        kubernetes_sd_configs:
          - role: endpoints
        scheme: https
        tls_config:
          ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
        relabel_configs:
          - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
            action: keep
            regex: default;kubernetes;https
Can we use a NodePort Service for the node-exporter DaemonSet? If the answer is no, how could we configure it as a target in the Prometheus config file? Could anyone help me understand the scenario? Any suggested links would also be fine.
As @gayathri confirmed in the comments: "it worked for me."
If you have the same issue as mentioned in the topic, check out this GitHub issue, specifically the answer added by @simonpasquier:
"We have debugged it offline and the problem was the network. Running the Prometheus container with --network=host solved the issue."
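In a Kubernetes deployment, the closest equivalent of running the container with --network=host is setting hostNetwork: true on the Prometheus pod. A rough sketch, assuming Prometheus runs as a Deployment and loads the prometheus-server-conf ConfigMap shown above; the deployment name and image tag are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-server
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-server
  template:
    metadata:
      labels:
        app: prometheus-server
    spec:
      hostNetwork: true                      # equivalent of docker run --network=host
      dnsPolicy: ClusterFirstWithHostNet     # keep cluster DNS while on the host network
      containers:
      - name: prometheus
        image: prom/prometheus:v2.26.0       # placeholder tag
        args:
        - --config.file=/etc/prometheus/prometheus.yml
        volumeMounts:
        - name: config
          mountPath: /etc/prometheus
      volumes:
      - name: config
        configMap:
          name: prometheus-server-conf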