Azure Defender is showing vulnerabilities in the NGINX ingress image; the ingress images are in ACR.
I did update the Helm repo, but it's still showing the same issue.
I am happy to provide more information if needed.
As a first step I would try this:
https://kubernetes.io/blog/2022/04/28/ingress-nginx-1-2-0/#skip-the-talk-what-do-i-need-to-use-this-new-approach
Second, you can read this to figure out the best solution for you.
https://support.f5.com/csp/article/K01051452
You can also take a look here for security issues:
https://github.com/kubernetes/ingress-nginx/issues/8372
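The first link describes moving to the new image registry. As a rough sketch of what that looks like with the ingress-nginx Helm chart, you could override the image values; the registry, image name, and tag below are assumptions based on the v1.2.0 announcement, so verify them against your chart version's values:

```
# Helm values override for the ingress-nginx chart (sketch; verify against your chart version)
controller:
  image:
    registry: registry.k8s.io          # newer image registry
    image: ingress-nginx/controller
    tag: "v1.2.0"                      # assumed tag; pick the patched release you need
```

If you mirror the controller image into ACR, make sure you re-pull and re-push the patched tag, otherwise Defender will keep scanning the old vulnerable image.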
Greetings Stack Overflow!
I am attempting to upgrade my Kubernetes ingress from v1beta1 to v1; with Terraform, it looks like this:
From: ingress already present
resource "kubernetes_ingress" "stuff" {}
To: proposed upgrade
resource "kubernetes_ingress_v1" "stuff" {}
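Spelled out a bit more, the proposed v1 resource might look like this (a sketch with hypothetical namespace, class, and service names; note it keeps the same metadata name as the old resource, which is why Kubernetes sees a conflict):

```
resource "kubernetes_ingress_v1" "stuff" {
  metadata {
    name      = "stuff"          # same metadata.name as the old v1beta1 ingress
    namespace = "default"        # hypothetical namespace
  }

  spec {
    ingress_class_name = "nginx" # hypothetical ingress class

    default_backend {
      service {
        name = "stuff-svc"       # hypothetical backend service
        port {
          number = 80
        }
      }
    }
  }
}
```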
Terraform (and, I believe, Kubernetes/GKE) is throwing an error that the resource stuff is already present, so it can't create it. I've tried renaming the resource, which didn't work, so I think the underlying error is that Kubernetes can't have two Ingresses with the same metadata name, and Terraform isn't negotiating this.
What's weird is that we had some vanilla k8s deployments that underwent this refactoring, and it worked like a champ!
Some people on the Kubernetes Slack and Reddit have recommended manually deleting the ingress, which should drop the load balancer... However, as I mentioned, vanilla k8s deployments accomplished this without manual intervention. I would really like Terraform to do its magic, and not have to manually intervene, extend downtime, etc., for this API upgrade or future API upgrades.
My question is this: natively, with Terraform, what's the most automatic way to get Terraform to upgrade the API?
I'm afraid of manually deleting the ingress, modifying the YAML from the GKE console, or forcing a Terraform import, since we have other microservices in the same namespace; I would like to tread lightly here.
Any ideas?
I am installing the NGINX ingress controller through a Helm chart, and the pods are not coming up. There seems to be a permission issue.
Chart link - https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx
I am using the latest version, 4.2.1.
I did the debugging described here: https://github.com/kubernetes/ingress-nginx/issues/4061
I also tried running as the root user (runAsUser: 0).
I think I started getting this issue after a cluster upgrade from 1.19 to 1.22; previously it was working fine.
Any suggestions on how to fix this?
```
unexpected error storing fake SSL Cert: could not create PEM certificate file /etc/ingress-controller/ssl/default-fake-certificate.pem: open /etc/ingress-controller/ssl/default-fake-certificate.pem: permission denied
```
You clearly have a permission problem. Looking at the chart you specified, there are multiple runAsUser values for different components:
```
controller.image.runAsUser: 101
controller.admissionWebhooks.patch.runAsUser: 2000
defaultBackend.image.runAsUser: 65534
```
I'm not sure why these differ, but if possible, try deleting your existing release and doing a fresh install.
If the issue still persists, check the deployment/pod events to see whether the cluster is alerting you about something.
Also worth noting: there were breaking changes to the Ingress resource in 1.22. Check this and this from the official release notes.
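If an override somewhere has clobbered those defaults, one option is to pin them back explicitly in your own values file. The paths below follow the chart's values layout, but verify them against the 4.2.1 values before applying:

```
# values.yaml override restoring the chart's default runAsUser settings (sketch)
controller:
  image:
    runAsUser: 101
  admissionWebhooks:
    patch:
      runAsUser: 2000
defaultBackend:
  image:
    runAsUser: 65534
```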
The issue occurred because not all worker nodes were properly upgraded, so the ingress controller couldn't set itself up properly. I installed it on a particular node whose version matched the cluster's, and then it worked properly.
I am using Istio 1.6, and I was trying to store metrics from Istio's Prometheus in an external Prometheus, based on the Istio best-practices doc. As a first step, I have to edit my configuration and add recording rules. I tried editing the ConfigMap of Istio's Prometheus and added the recording rules. The edit succeeds, but when I try to see the rules in the Prometheus dashboard, they do not appear (which I believe means the config did not apply). I also tried just deleting the pod to see whether the new pod would pick up the new configuration, but the problem remains.
What am I doing wrong? Any suggestions and answers are appreciated.
The problem was the way I added the recording rules. I added rules in rules.yaml but forgot to reference that file in the rule_files field of the Prometheus config file. I didn't know how to do Prometheus configuration, and that was the problem.
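For reference, the fix amounts to wiring the rules file into the main config. A minimal sketch (file paths and rule names here are illustrative, not from my actual setup):

```
# prometheus.yml (sketch)
global:
  scrape_interval: 15s

rule_files:
  - /etc/config/rules.yaml   # without this entry, rules in rules.yaml are never loaded
```

```
# rules.yaml (sketch of one recording rule)
groups:
  - name: istio.workload
    rules:
      - record: workload:istio_requests_total
        expr: sum(istio_requests_total) by (destination_workload)
```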
I also referred to this GitHub example.
Also check out this post on Prometheus federation.
I am facing difficulty getting Jaeger working with Istio.
Can anyone please describe the steps to configure Jaeger and Istio for a demo application? I have tried a few blogs and sites, but unfortunately nothing worked for me. If anyone could help me with this, that would be great.
I hope you have followed the official documentation for Jaeger with Istio.
If you are using the Helm chart, make the following changes.
In the main values.yaml file:

```
tracing:
  enabled: true
```

In tracing/values.yaml:

```
provider: jaeger
```
Expose the dashboard via kubectl port-forward or an ingress.
Official Documentation.
https://istio.io/docs/tasks/telemetry/distributed-tracing/jaeger/
NOTE: The important thing is that, by default, Jaeger will trace roughly 1% of requests (i.e., 1 request out of 100), so send a lot of requests; only then will you see a trace in the UI.
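For a demo, you can raise the sampling rate so nearly every request is traced. A sketch of the Helm values change (the exact key depends on your Istio version and install method, so treat this as an assumption to verify):

```
# Helm values sketch: sample 100% of requests (demo only; costly in production)
pilot:
  traceSampling: 100.0
```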
I'm setting up Spinnaker in K8s with aws-ecr. My setup and steps are:
on AWS side:
Added policies ecr-pull, ecr-push, and ecr-generate-token
Attached the policy to a role
Spinnaker setup:
Modified values.yaml with the settings below:
```
accounts:
  - name: my-ecr
    address: https://123456xxx.dkr.ecr.my-region.amazonaws.com
    repositories:
      - 123456xxx.dkr.ecr..amazonaws.com/spinnaker-test-project
```
Annotated the clouddriver deployment to use the created role (using the IAM role in a pod by referencing the role name in an annotation on the pod specification).
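The annotation step looks roughly like this, assuming a kube2iam/kiam-style setup (the annotation key and role name here are assumptions, not from my actual manifests):

```
# clouddriver deployment pod template (sketch)
spec:
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: my-ecr-role   # hypothetical role with the ECR policies attached
```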
But it doesn't work, and the error on the Clouddriver side is:

```
.d.r.p.a.DockerRegistryImageCachingAgent : Could not load tags for 1234xxxxx.dkr.ecr.<my_region>.amazonaws.com/spinnaker-test-project in https://1234xxxxx.dkr.ecr.<my_region>.amazonaws.com
```
I would like some help or advice on what I'm missing. Thank you!
I got the answer from the official Spinnaker Slack channel: adding an IAM policy to the clouddriver pod unfortunately won't work, since it uses the Docker client instead of the AWS client. The workaround to make it work can be found here.
Note: ECR support is currently broken in Halyard. This might get fixed in the future after Halyard migrates from the Kubernetes provider v1 to v2 (or earlier), so please verify with the community or the docs.