Loki behind https ingress configuration with helm - kubernetes

Is there any way to configure promtail to send logs to loki via https-ingress?
promtail ---> https-ingress ---> loki
I used the promtail Helm chart and configured the Loki URL as http://gateway.loki.monitoring.example.com:80/loki/api/v1/push. After deploying the promtail chart, I see the errors below in the promtail pod:
level=error ts=2022-03-28T14:10:23.740581978Z caller=client.go:360 component=client host=gateway.loki.monitoring.example.com:80 msg="final error sending batch" status=308 error="server returned HTTP status 308 Permanent Redirect (308): <html>"
I even specified https in the Loki URL, as https://gateway.loki.monitoring.example.com:80/loki/api/v1/push, but it still fails:
level=warn ts=2022-03-28T14:27:47.976570998Z caller=client.go:349 component=client host=gateway.loki.monitoring.example.com:80 msg="error sending batch, will retry" status=-1 error="Post \"https://gateway.loki.monitoring.example.com:80/loki/api/v1/push\": http: server gave HTTP response to HTTPS client"
I found this config https://grafana.com/docs/loki/latest/installation/helm/#run-loki-behind-https-ingress, but it is outdated
NOTE:
I have not configured HTTPS on the Loki side.
I configured the loki-distributed chart's ingress like below (the rest of the ingress config is default):
...
ingress:
  # -- Specifies whether an ingress for the gateway should be created
  enabled: true
  # -- Ingress Class Name. MAY be required for Kubernetes versions >= 1.18
  ingressClassName: monitoring-ingress
  # -- Annotations for the gateway ingress
  annotations:
    cert-manager.io/cluster-issuer: monitoring-cluster-issuer
  # -- Hosts configuration for the gateway ingress
  hosts:
    - host: gateway.loki.monitoring.example.com
      paths:
        - path: /
          # -- pathType (e.g. ImplementationSpecific, Prefix, .. etc.) might also be required by some Ingress Controllers
          pathType: Prefix
  # -- TLS configuration for the gateway ingress
  tls:
    - secretName: loki-gateway-tls-certs
      hosts:
        - gateway.loki.monitoring.example.com
...
Did I miss any ingress config at loki?

After playing around for some time, I understood that I needed to remove the port and use https in the Loki URL. The 308 was the ingress redirecting plain HTTP to HTTPS (controllers such as ingress-nginx enable ssl-redirect by default once TLS is configured on the ingress), and the second error happened because promtail was speaking TLS to port 80, which still serves plain HTTP. The URL should look like below:
https://gateway.loki.monitoring.example.com/loki/api/v1/push
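For reference, a minimal sketch of the corresponding promtail Helm values (the config.clients layout below matches the grafana/promtail chart at the time of writing; adjust for your chart version):
config:
  clients:
    # Port 443 is implied by https; pinning :80 here is what triggered
    # the "server gave HTTP response to HTTPS client" error.
    - url: https://gateway.loki.monitoring.example.com/loki/api/v1/push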

Related

Kubernetes Nginx ingress appears to remove headers before sending requests to backend service

I am trying to deploy a SpringBoot Java application hosted on Apache Tomcat on a Kubernetes cluster, using Nginx Ingress for URL routing. More specifically, I am deploying on Minikube on my local machine, exposing the SpringBoot application as a ClusterIP Service, and executing the
minikube tunnel
command to expose services to my local machine. Visually, the process is the following...
Browser -> Ingress -> Service -> Pod hosting docker container of Apache Tomcat Server housing SpringBoot Java API
The backend service requires a header "SM_USER", which for the moment can be any value. When running the backend application as a Docker container with port forwarding, accessing the backend API works great. However, when deploying to a Kubernetes cluster behind an Nginx ingress, I get 403 errors from the API stating that the SM_USER header is missing. My guess is that the header is included with the request to the ingress but removed when being routed to the backend service.
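A quick way to test that theory (the header value here is a hypothetical placeholder; the hostname comes from the setup below) is to send the header explicitly from outside the cluster and check whether the backend still returns 403:
curl -v -H "SM_USER: testuser" http://mydomain.com/api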
My setup is the following.
Deploying on Minikube
minikube start
minikube addons enable ingress
eval $(minikube docker-env)
docker build . -t api -f Extras/DockerfileAPI
docker build . -t ui -f Extras/DockerfileUI
kubectl apply -f Extras/deployments-api.yml
kubectl apply -f Extras/deployments-ui.yml
kubectl expose deployment ui --type=ClusterIP --port=8080
kubectl expose deployment api --type=ClusterIP --port=8080
kubectl apply -f Extras/ingress.yml
kubectl apply -f Extras/ingress-config.yml
edit /etc/hosts file to resolve mydomain.com to localhost
minikube tunnel
Ingress YAML
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # Tried forcing the header by manual addition, no luck (tried with and without)
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header SM_USER $http_Admin;
spec:
  rules:
    - host: mydomain.com
      http:
        paths:
          - path: /ui
            pathType: Prefix
            backend:
              service:
                name: ui
                port:
                  number: 8080
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
Ingress Config YAML
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: default
  labels:
    app.kubernetes.io/name: my-ingress
    app.kubernetes.io/part-of: my-ingress
data:
  # Tried allowing underscores and also using proxy protocol (tried with and without)
  enable-underscores-in-headers: "true"
  use-proxy-protocol: "true"
When navigating to mydomain.com/api, I expect to receive the root API interface, but instead I receive the 403 error page indicating that SM_USER is missing. Note, this is not a 403 Forbidden error regarding the ingress or access from outside the cluster; the specific SpringBoot error page I receive is from within my application, custom-built to indicate that the header is missing. In other words, my routing is definitely correct and I am able to access the API; it's just that the header is missing.
Are there configs or parameters I am missing? Possibly an annotation?
This is resolved. The issue was that the ConfigMap was being applied in the application namespace. Note: even if the Ingress object is in the application namespace, if you are using minikube's built-in ingress functionality, the ConfigMap must be applied in the Nginx ingress controller's namespace.
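For example, a sketch of the same ConfigMap aimed at the controller's namespace (recent minikube versions run the ingress addon in the ingress-nginx namespace, and the controller reads the ConfigMap named by its --configmap flag, typically ingress-nginx-controller; verify both for your setup with kubectl -n ingress-nginx get cm):
kind: ConfigMap
apiVersion: v1
metadata:
  # Name and namespace must match what the controller actually watches.
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-underscores-in-headers: "true"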

ingress nginx how to debug 502 page even though the ports in service and Ingress are correct?

I have a web application running behind a ClusterIP service on a worker node on port 5001. I'm using k3s for the cluster deployment; I checked the cluster connection and it's running fine.
The deployment has the container port set to 5001:
ports:
  - containerPort: 5001
Here is the service file:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: user-ms
  name: user-ms
spec:
  ports:
    - name: http
      port: 80
      targetPort: 5001
  selector:
    io.kompose.service: user-ms
status:
  loadBalancer: {}
and here is the ingress file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-ms-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: user-ms
                port:
                  number: 80
I'm getting a 502 Bad Gateway error whenever I type in my worker or master IP address.
Expected: it should return the web application page.
I looked online and most answers mention a wrong port in the service or Ingress, but my ports are correct; yes, I triple-checked:
calling the user-ms service on port 80 from another pod -> worked
calling the cluster IP on the worker node on port 5001 -> worked
The ports are correct, so why is the ingress returning 502?
(Screenshots attached showing: the ingress describe output, the describe of the nginx ingress controller pod, the nginx ingress pod running normally, and the nginx ingress pod logs. Sorry for the images, but I'm using a streaming machine to access the terminal, so I can't copy-paste.)
How should I go about debugging this error?
OK, I managed to figure this out. In its default setup, K3s uses Traefik as its default ingress, which is why my nginx ingress log didn't show anything about the 502 Bad Gateway.
I decided to tear down my cluster and set it up again, this time following the suggestion from this issue https://github.com/k3s-io/k3s/issues/1160#issuecomment-1058846505 to create the cluster without Traefik:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable=traefik" sh -
Now when I call kubectl get pods --all-namespaces I no longer see a Traefik pod running; previously there were Traefik pods running.
Once all of that was done, I ran apply on the ingress once again -> got a 404 error. I checked the nginx ingress pod logs; they now showed a new error about a missing ingress class, so I added the following to my ingress configuration file under metadata:
metadata:
  name: user-ms-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
Now when I went to the IP of the worker node once more -> the 404 error was gone, but I got a 502 Bad Gateway error. I checked the logs and saw connection-refused errors.
I figured out that I had set a network policy for all of my microservices; I deleted the network policy and removed its settings from all my deployment files.
Finally I checked once more, and I can access my API and Swagger page normally.
TLDR:
If you are using nginx ingress on K3s, remember to disable Traefik when creating the cluster.
Don't forget to set the ingress class inside your ingress configuration.
Don't set up a network policy that blocks the nginx ingress pod from calling your other pods (if you do need one, see the sketch below).
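If a NetworkPolicy is genuinely required, a minimal sketch that still admits traffic from the ingress controller might look like the following (the namespaceSelector relies on the kubernetes.io/metadata.name label that Kubernetes 1.21+ sets automatically, and assumes the controller runs in the ingress-nginx namespace; the pod selector is taken from the manifests above):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-nginx
spec:
  # Applies to the user-ms pods from the Service above.
  podSelector:
    matchLabels:
      io.kompose.service: user-ms
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Admit traffic originating in the ingress controller's namespace.
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx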
You can turn on access logging on nginx, which will let you see more logs on the ingress controller and trace every request routed through the ingress. Whether you are loading a UI or accessing a particular endpoint, the calls will be visible in the nginx-controller logs. From this you can conclude whether incoming requests are actually being routed to the proper service, and then start debugging the service itself (e.g. check whether you can curl the endpoint from any pod within the cluster).
I noticed that you are using the image k8s.gcr.io/ingress-nginx/controller:v1.2.0. If you installed using Helm, there should be a kubernetes-ingress ConfigMap for the ingress controller; by default disable-access-log will be true. Change it to false and you should start seeing more logs on the ingress controller. You might want to bounce the ingress controller pods if you still do not see detailed logs.
kubectl edit cm -n <namespace> kubernetes-ingress
apiVersion: v1
data:
  disable-access-log: "false"   # turn this to false
  map-hash-bucket-size: "128"
  ssl-protocols: SSLv2 SSLv3 TLSv1 TLSv1.1 TLSv1.2 TLSv1.3
kind: ConfigMap
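To bounce the controller pods after the change, something like the following works (the deployment name and namespace are placeholders; substitute your own):
kubectl rollout restart deployment <ingress-controller-deployment> -n <namespace>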

Traefik behind SSL-terminating load balancer returns 404

I have a K8s setup with traefik being exposed like this
kubernetes:
  ingressClass: traefik
service:
  nodePorts:
    http: 32080
serviceType: NodePort
Behind it, I forward some requests to different services:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: my-host.com
      http:
        paths:
          - path: /my-first-path
            backend:
              serviceName: my-nodeJs-services
              servicePort: 3000
When the DNS is set to resolve directly to my IP, the application works fine over HTTP:
http://my-host.com:32080/my-first-path
But when someone adds SSL through AWS ALB / API Gateway, the application can no longer be reached, failing with a 404 Not Found error.
The route is like this:
https://my-host.com/my-first-path
On the AWS side, they configured something like this:
https://my-host.com => SSL termination => forward all to 43.43.43.43:32080
Does this fail because Traefik expects http://my-host.com but not https://my-host.com, leading to its failure to find the matching route? Or maybe the hostname is lost at SSL-termination time, so Traefik cannot find a route?
What should I do in this situation?
I am not very familiar with ALB, but what is probably happening is that the requests received by the load balancer contain the header Host: my-host.com, and when they get forwarded to your ingress controller, the header is replaced by Host: 43.43.43.43. If this is the case, I see three solutions:
ALB might be able to pass the original Host header to the target (you will have to check in the docs whether it's possible).
If the application behind your ingress doesn't check the Host header, you can write an ingress that doesn't match a specific host; in such manifests the host field is simply not specified, as sketched below.
If name resolution works internally, you can define a name for your target and use this name in your ALB and in your ingress.
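A minimal sketch of the second option, reusing the manifest from the question but dropping the host field so the rule matches requests for any hostname:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-name
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    # No "host:" field, so this rule matches any Host header.
    - http:
        paths:
          - path: /my-first-path
            backend:
              serviceName: my-nodeJs-services
              servicePort: 3000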

Nginx Ingress returns 502 Bad Gateway on Kubernetes

I have a Kubernetes cluster deployed on AWS (EKS). I deployed the cluster using the “eksctl” command-line tool. I'm trying to deploy a Dash Python app on the cluster, without success. The default port for Dash is 8050. For the deployment I used the following resources:
pod
service (ClusterIP type)
ingress
You can check the resource configuration files below:
pod-configuration-file.yml
kind: Pod
apiVersion: v1
metadata:
  name: dashboard-app
  labels:
    app: dashboard
spec:
  containers:
    - name: dashboard
      image: my_image_from_ecr
      ports:
        - containerPort: 8050
service-configuration-file.yml
kind: Service
apiVersion: v1
metadata:
  name: dashboard-service
spec:
  selector:
    app: dashboard
  ports:
    - port: 8050       # exposed port
      targetPort: 8050
ingress-configuration-file.yml (host based routing)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.my_domain
      http:
        paths:
          - backend:
              serviceName: dashboard-service
              servicePort: 8050
            path: /
I followed the steps below:
kubectl apply -f pod-configuration-file.yml
kubectl apply -f service-configuration-file.yml
kubectl apply -f ingress-confguration-file.yml
I also noticed that the pod deployment works as expected:
kubectl logs my_pod:
and the output is:
Dash is running on http://127.0.0.1:8050/
Warning: This is a development server. Do not use app.run_server
in production, use a production WSGI server like gunicorn instead.
* Serving Flask app "annotation_analysis" (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
You can see from the ingress configuration file that I want to do host-based routing using my domain. For this to work, I have also deployed an nginx-ingress. I have also created an “A” record set using Route53 that maps “dashboard.my_domain” to the nginx-ingress:
kubectl get ingress
and the output is:
NAME                HOSTS                 ADDRESS                                      PORTS   AGE
dashboard-ingress   dashboard.my_domain   nginx-ingress.elb.aws-region.amazonaws.com   80      93s
Moreover,
kubectl describe ingress dashboard-ingress
and the output is:
Name:             dashboard-ingress
Namespace:        default
Address:          nginx-ingress.elb.aws-region.amazonaws.com
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host            Path  Backends
  ----            ----  --------
  host.my-domain
                  /     dashboard-service:8050 (192.168.36.42:8050)
Annotations:      nginx.ingress.kubernetes.io/force-ssl-redirect: false
                  nginx.ingress.kubernetes.io/rewrite-target: /
                  nginx.ingress.kubernetes.io/ssl-redirect: false
Events:           <none>
Unfortunately, when I try to access the Dash app in the browser, I get a 502 Bad Gateway error from nginx. Could you please help me, because my Kubernetes knowledge is limited.
Thanks in advance.
It had nothing to do with Kubernetes or AWS settings. I had to change my Python Dash code from:
if __name__ == "__main__":
    app.run_server(debug=True)
to:
if __name__ == "__main__":
    app.run_server(host='0.0.0.0', debug=True)
The addition of host='0.0.0.0' did the trick! (Flask's development server binds to 127.0.0.1 by default, so the app was only reachable from inside the pod itself, and the connections proxied by nginx were refused.)
I think you'll need to check whether any other service is exposed at path / on the same host.
Secondly, try removing the rewrite-target annotation. Also, can you please update your question with the output of kubectl describe ingress <ingress_name>?
I would also suggest using the backend-protocol annotation with the value HTTP (the default value; you can skip it if the dashboard application is not SSL-configured and only this application is served at the said host). But you may need it if multiple applications are served at this host: create one Ingress with backend-protocol: HTTP for the non-SSL services, and another with backend-protocol: HTTPS to serve traffic to the SSL-enabled services, as sketched below.
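A minimal sketch of the annotation, reusing the names from the question's manifests:
kind: Ingress
metadata:
  name: dashboard-ingress
  annotations:
    # Speak plain HTTP to the backend pods (the default).
    # Use "HTTPS" instead for a backend that terminates TLS itself.
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"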
For more information on the backend-protocol annotation, kindly refer to this link.
I have often faced this issue in my Ingress setup, and these steps have helped me resolve it.

OpenShift Service Proxy timeout

I have an application deployed on OpenShift Container Platform v3.6. It consists of multiple services interconnected to each other.
The frontend service calls a time-consuming function of the backend service (through a REST call), but after 30 seconds it receives a "504 Gateway Timeout" message. The frontend runs behind nginx, but I've already configured it with long proxy send/read timeouts, so the 504 message doesn't come from it. I think it comes from the Service Proxy component of the OpenShift platform, but I can't find out where and how to configure this kind of service proxy timeout. I know about the HAProxy timeout for external routes, but my services live in the same cluster application and communicate with each other via OpenShift Container Platform DNS.
Could this be a Service Proxy timeout issue? How can it be configured?
Thanks!
Your route timeout is the culprit. The haproxy ingress router is terminating the request. You can configure the timeout by following the docs below:
https://docs.openshift.com/container-platform/3.5/install_config/configuring_routing.html
For example:
# Set the timeout on 'longrunningroute' to five minutes.
oc annotate route longrunningroute --overwrite haproxy.router.openshift.io/timeout=5m
In my case I didn't annotate the route myself but added the annotation to the Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  namespace: my-namespace
  annotations:
    haproxy.router.openshift.io/timeout: 600s
spec:
  tls:
    - hosts:
        - example.com
      secretName: https-tls-secret
  rules:
    - host: example.com
      http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
The routes are managed by the ingress and therefore inherit the annotations from it.
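To verify, you can inspect the route that OpenShift generates for the Ingress and check that the annotation was inherited (the namespace comes from the example above):
oc get routes -n my-namespace -o yaml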