Azure Defender is showing vulnerabilities in the Nginx ingress image.
The ingress images are stored in ACR.
I did update the Helm repo, but it's still showing the same issue.
I am happy to provide more information if needed.
As a first step, I would try this:
https://kubernetes.io/blog/2022/04/28/ingress-nginx-1-2-0/#skip-the-talk-what-do-i-need-to-use-this-new-approach
Second, you can read this to figure out what the best solution is for you:
https://support.f5.com/csp/article/K01051452
You can also take a look here for security issues:
https://github.com/kubernetes/ingress-nginx/issues/8372
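Also worth noting: helm repo update only refreshes your local chart index; it does not change the image the running controller uses or the image stored in ACR that Defender is scanning. Here is a rough sketch of moving to a patched controller image, assuming the community ingress-nginx chart (the registry name, release name, and tag are placeholders; use whatever version the blog post above recommends):

# Import a patched controller image into your ACR
az acr import --name myregistry \
  --source registry.k8s.io/ingress-nginx/controller:v1.2.0 \
  --image ingress-nginx/controller:v1.2.0

# Point the Helm release at that image; the digest is cleared because the chart
# pins its default image by digest, which would otherwise override the new tag
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --reuse-values \
  --set controller.image.registry=myregistry.azurecr.io \
  --set controller.image.image=ingress-nginx/controller \
  --set controller.image.tag=v1.2.0 \
  --set controller.image.digest=""

Defender findings can lag until the image in ACR is rescanned, so the old digest may keep showing for a while.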
Does anyone know how to configure Promtail to watch and tail custom log paths in a Kubernetes pod? I have a deployment that creates customized log files in a directory such as /var/log/myapp. I found some documentation here that says to deploy Promtail as a sidecar to the container you want to collect logs from. I was hoping someone could explain how this method works in practice. Does it need to be done as a sidecar, or could it be done as a DaemonSet? Or, if you have an alternative solution that has proven to work, could you please show me an example?
Posting the comment as a community wiki answer for better visibility:
The information below is taken from the README.md of the GitHub repo provided by atlee19:
These docs assume:
- you have Loki and Grafana already deployed (please refer to the official documentation for installation)
- the logfile you want to scrape is in JSON format
This Helm chart deploys an application pod with 2 containers:
- a Golang app writing logs to a separate file
- a Promtail container that reads that log file and sends it to Loki
The file path can be updated via the ./helm/values.yaml file.
sidecar.labels is a map where you can add the labels that will be added to your log entry in Loki.
Example:
Logfile located at /home/slog/creator.log
Adding labels:
  job: promtail-sidecar
  test: golang

sidecar:
  logfile:
    path: /home/slog
    filename: creator.log
  labels:
    job: promtail-sidecar
    test: golang
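For context on how the sidecar method works in practice: the app container and the Promtail container share a volume, Promtail tails the file on that shared path, and pushes each line to Loki with the configured labels. Below is a minimal sketch of the Promtail configuration such a sidecar would mount; the Loki push URL and the job_name are assumptions for illustration, while the path and labels match the example above:

server:
  http_listen_port: 9080
positions:
  filename: /tmp/positions.yaml
clients:
  # assumed in-cluster Loki address; adjust to your deployment
  - url: http://loki:3100/loki/api/v1/push
scrape_configs:
  - job_name: myapp-sidecar
    static_configs:
      - targets:
          - localhost
        labels:
          job: promtail-sidecar
          test: golang
          # file written by the app container onto the shared volume
          __path__: /home/slog/creator.log

On the sidecar-vs-DaemonSet part of the question: a DaemonSet Promtail normally tails the node-level files under /var/log/pods, which only contain the containers' stdout/stderr. A custom file written inside the container's filesystem is not visible there, so for custom log files the sidecar with a shared emptyDir volume is the simpler, proven pattern.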
I am using Istio 1.6 and I was trying to store metrics from the Istio Prometheus in an external Prometheus, based on the Istio best practices doc. As a first step, I have to edit my configuration and add recording rules. I tried to edit the ConfigMap of the Istio Prometheus and added the recording rules. The edit is successful, but when I try to see the rules in the Prometheus dashboard, they do not appear (which I believe means the config did not apply). I also tried to just delete the pod and see if the new pod picks up the new configuration, but the problem remains.
What am I doing wrong? Any suggestions and answers are appreciated.
The problem was the way I added the recording rules. I added the rules in rules.yaml but forgot to reference that file in the rule_files field of the Prometheus config. I didn't know how to do Prometheus configuration, and that was the problem.
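For anyone who hits the same thing: Prometheus only loads rule files that are listed under rule_files in its main configuration, so adding a rules.yaml key to the ConfigMap is not enough on its own. A minimal sketch of the two pieces (the group name and expression are placeholders, not the actual Istio recording rules, and the mount path depends on how the ConfigMap is mounted in the Prometheus pod):

# prometheus.yml
rule_files:
  - /etc/config/rules.yaml

# rules.yaml
groups:
  - name: example-recording-rules
    rules:
      - record: job:istio_requests_total:rate5m
        expr: sum(rate(istio_requests_total[5m])) by (job)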
I also referred to this GitHub example.
Also check out this post on Prometheus federation.
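Since the end goal was to get the Istio metrics into an external Prometheus, the federation approach from that post comes down to a scrape job in the external Prometheus that pulls from the Istio Prometheus /federate endpoint. A rough sketch; the target address and the match selector are assumptions for a default in-cluster install:

scrape_configs:
  - job_name: istio-federation
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__=~"istio_.*"}'
    static_configs:
      - targets:
          # assumed service address of the Istio-bundled Prometheus
          - prometheus.istio-system.svc:9090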
I'm running Kubernetes v1.7.10, Istio 1.0.4, and Kiali v0.9, which is bundled with Istio 1.0.4.
Then I deployed the bookinfo example into a namespace, pointed the gateway at the bookinfo service, and accessed the productpage homepage from a browser; everything was fine.
But in the Graph view of Kiali, only a diamond icon with the label "unknown" is displayed. What's wrong?
I can see all the services, workloads, and Istio configs in Kiali, just no topology.
Finally, I traced it back to the Prometheus metrics, something like:
istio_requests_total{connection_security_policy="none",destination_app="unknown",destination_principal="unknown",destination_service="details.test.svc.cluster.local",destination_service_name="details",destination_service_namespace="test",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",instance="172.22.178.111:42422",job="istio-mesh",reporter="destination",request_protocol="http",response_code="200",source_app="unknown",source_principal="unknown",source_version="unknown",source_workload="unknown",source_workload_namespace="unknown"}
I noticed that destination_app, destination_version, source_app, source_version, etc. were all "unknown"; I believe that's why no topology is displayed.
And the metrics from http://istio-telemetry:42422/metrics:
istio_requests_total{connection_security_policy="none",destination_app="unknown",destination_principal="unknown",destination_service="details.test.svc.cluster.local",destination_service_name="details",destination_service_namespace="test",destination_version="unknown",destination_workload="unknown",destination_workload_namespace="unknown",reporter="destination",request_protocol="http",response_code="200",source_app="unknown",source_principal="unknown",source_version="unknown",source_workload="unknown",source_workload_namespace="unknown"} 32
Then I did another test: I set up a Kubernetes v1.10.3 cluster, installed Istio 1.0.4 into it, and deployed the bookinfo example; everything was fine, with a beautiful topology graph.
So I suspect: is there anything different between the Kubernetes versions that breaks the graph view of Kiali?
Can someone give any hints?
Thanks,
likun
I can't find clear information on the Istio website, but I believe Kubernetes below 1.9 isn't supported. This is suggested on the install page for Minikube: https://istio.io/docs/setup/kubernetes/platform-setup/minikube/
Maybe you can try with an older version of Istio, but I wouldn't guarantee it's going to work either. You would also have to pick an older version of Kiali.
Kiali builds its graph from labels in Istio telemetry, so you're right to correlate with what you see in Prometheus. In particular, source_app, source_workload, destination_app and destination_workload are used by Kiali to detect graph relations.
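For reference, those *_app and *_version attributes are populated from the app and version labels on the workload pods (Mixer looks up the pod metadata in the Kubernetes API), so when that lookup fails everything collapses to "unknown". The bookinfo deployments already carry those labels on their pod templates, roughly like this trimmed sketch of details-v1 (the image tag is illustrative):

apiVersion: apps/v1   # extensions/v1beta1 on clusters as old as 1.7
kind: Deployment
metadata:
  name: details-v1
spec:
  selector:
    matchLabels:
      app: details
      version: v1
  template:
    metadata:
      labels:
        app: details    # feeds source_app / destination_app
        version: v1     # feeds source_version / destination_version
    spec:
      containers:
        - name: details
          image: istio/examples-bookinfo-details-v1:1.8.0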
What have I done wrong?
I installed Istio last week on GKE and, when following the instructions step-by-step, everything appeared to work correctly including all the Grafana dashboards.
This week I attempted to recreate the configuration to share with my team. Everything appears to work correctly except the per-service (e.g. productpage) dashboards that report "no datapoints".
I did delete and recreate some resources out of order, and perhaps this explains my error?
I would appreciate a heuristic that could help me diagnose where I've gone wrong and how to address it. My largest area of non-familiarity is with Prometheus. Clearly Grafana is connected to Prometheus. What could I check in Prometheus to ensure it's configured correctly?
Perhaps I should simply delete and recreate, but I'd like to learn from this experience.
istioctl version:
Version: 0.1.5
GitRevision: 21f4cb4
GitBranch: master
User: jenkins@ubuntu-16-04-build-de3bbfab70500
GolangVersion: go1.8.1
KubeInjectHub: docker.io/istio
KubeInjectTag: 0.1
apiserver version:
Version: 0.1.5
GitRevision: 21f4cb4
GitBranch: master
User: jenkins@ubuntu-16-04-build-de3bbfab70500
GolangVersion: go1.8.1
When we've seen this before, it is typically fixed by just refreshing the page in the browser. The metrics powering the summary dashboards are the same ones that are used to power the service graphs.
Can you try refreshing the page and seeing what happens?
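If a refresh doesn't help, one way to rule Prometheus in or out is to look at its UI directly and confirm it is scraping the Istio targets and receiving the per-service metrics the dashboards query. A rough sketch (the label selector, namespace, and pod name are assumptions; adjust them to your install):

# Find the Prometheus pod bundled with the Istio addons
kubectl get pods --all-namespaces -l app=prometheus

# Forward its UI to your machine (substitute the namespace and pod name found above)
kubectl -n <namespace> port-forward <prometheus-pod> 9090:9090

# Then, in a browser:
#   http://localhost:9090/targets  - the Istio scrape targets should all be UP
#   http://localhost:9090/graph    - query the mesh request-count metric and filter on
#                                    the productpage service; if no series come back,
#                                    Grafana has nothing to plot for that dashboard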