Issue in Istio Integration with Ambassador API gateway - kubernetes

I have installed the Ambassador API Gateway on an AWS EKS cluster. It's working as expected.
Now I'd like to integrate Istio service mesh.
I'm following the steps in Ambassador's official documentation:
https://www.getambassador.io/docs/edge-stack/latest/howtos/istio/#istio-integration.
But after integrating Istio, some Ambassador pods keep crashing; at any given time only 1 of the 3 pods is healthy.
Note: the Istio sidecar is injected successfully into all Ambassador pods, and I have tried Ambassador 2.1.1 and 2.1.2; both have the same issue. I'm not able to keep all Ambassador pods healthy.
My EKS version is v1.19.13-eks
Below is the error:
time="2022-03-02 12:30:17.0687" level=error msg="Post \"http://localhost:8500/_internal/v0/watt?url=http%3A%2F%2Flocalhost%3A9696%2Fsnapshot\": dial tcp 127.0.0.1:8500: connect: connection refused" func=github.com/datawire/ambassador/v2/cmd/entrypoint.notifyWebhookUrl file="/go/cmd/entrypoint/notify.go:124" CMD=entrypoint PID=1 THREAD=/watcher
Please let me know if the above documentation is not sufficient for Istio integration with Ambassador on AWS EKS.
Edit 1: On further investigation I found that the issue occurs when I integrate Istio with PeerAuthentication in STRICT mode. There is no such issue with the default (PERMISSIVE) mode.
But another issue appears once STRICT mode is enabled: Ambassador now fails to connect to the Redis service.
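For reference, a mesh-wide STRICT policy looks roughly like this (a sketch; the name `default` and namespace `istio-system` follow Istio's convention for mesh-wide policies):

```yaml
# Mesh-wide PeerAuthentication requiring mutual TLS for all workloads.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```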

After some investigation and testing I found a way to integrate Istio with Ambassador with PeerAuthentication in STRICT mode.
The fix: update the REDIS_URL env variable to include a scheme,
from:
REDIS_URL: ambassador-redis:6379
to:
REDIS_URL: https://ambassador-redis:6379
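One way to apply this change, assuming Ambassador runs as a Deployment named `ambassador` in the `ambassador` namespace (both names are assumptions; adjust to your install):

```shell
# Update the env var on the (assumed) ambassador Deployment;
# the Deployment then rolls out new pods with the new value.
kubectl -n ambassador set env deployment/ambassador \
  REDIS_URL=https://ambassador-redis:6379
```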


gmp managed prometheus example not working on a brand new vanilla stable gke autopilot cluster

Google Managed Prometheus seems like a great service, however at the moment it does not work even with the official example... https://cloud.google.com/stackdriver/docs/managed-prometheus/setup-managed
Setup:
create a new Autopilot cluster (1.21.12-gke.2200)
enable managed Prometheus via the gcloud CLI:
gcloud beta container clusters update <mycluster> --enable-managed-prometheus --region us-central1
add the port 8443 firewall rule for the webhook
install ingress-nginx
try to use the PodMonitoring manifest to get metrics from ingress-nginx
Error from server (InternalError): error when creating "ingress-nginx/metrics.yaml": Internal error occurred: failed calling webhook "default.podmonitorings.gmp-operator.gke-gmp-system.monitoring.googleapis.com": Post "https://gmp-operator.gke-gmp-system.svc:443/default/monitoring.googleapis.com/v1/podmonitorings?timeout=10s": x509: certificate is valid for gmp-operator, gmp-operator.gmp-system, gmp-operator.gmp-system.svc, not gmp-operator.gke-gmp-system.svc
There is a thread suggesting this will all be fixed this week (8/11/2022), https://github.com/GoogleCloudPlatform/prometheus-engine/issues/300, but it seems like this should work regardless.
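For context, the PodMonitoring manifest in question looks roughly like this (a sketch; the labels and the port name `metrics` are assumptions that must match your ingress-nginx deployment, and older operator versions used `apiVersion: monitoring.googleapis.com/v1alpha1`):

```yaml
# Scrape metrics from ingress-nginx pods via the GMP operator.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
  endpoints:
  - port: metrics     # named container port exposing Prometheus metrics
    interval: 30s
```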
If I try to port-forward ...
kubectl -n gke-gmp-system port-forward svc/gmp-operator 8443
error: Pod 'gmp-operator-67d5fff8b9-p4n7t' does not have a named port 'webhook'

Set up nginx ingress controller on Kubernetes Cluster

I am unable to figure out how to set up an ingress controller on a Kubernetes cluster (not minikube). Every nginx ingress setup I followed resulted in an error and the controller was not set up properly. Basically, I want an equivalent of the minikube addons enable ingress command.
Thanks.
Edit 1:
I am following the installation steps at https://kubernetes.github.io/ingress-nginx/deploy/
I have tried the bare-metal and cloud installations, and a couple of other ways to install the nginx-ingress controller.
In a couple of installations, the external IP was stuck on <pending> forever. In the cloud installation, while hosting the ingress service, I encountered this error:
Error from server (InternalError): error when creating "kubernetes-custom-scheduler/kubernetes/configuration/services/loki-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": dial tcp 10.98.61.194:443: connect: connection refused
I am currently using Debian GNU/Linux 10 (buster).
I have tried the bare-metal ingress controller from https://kubernetes.github.io/ingress-nginx/deploy/ but it only supports a NodePort Service. I need the nginx-ingress controller for ClusterIP Services.
The easiest way would be to install it with Helm:
https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/
If you do not have Helm, install it first:
https://helm.sh/docs/intro/install/
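Note the link above is for the NGINX Inc. controller; for the community ingress-nginx controller that the question refers to, a Helm install sketch looks like this (repo URL and chart name per the official ingress-nginx chart; the namespace is a common convention):

```shell
# Add the community ingress-nginx chart repo and install the controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```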

Grafana running in Kubernetes Slack Webhook Error 502

I set up Grafana to run in GKE (Kubernetes) with a Service and the default Ingress controller to open it to the internet. Everything is working without any issues.
After creating some dashboards I wanted to set up Slack alerting using the Slack webhook. After filling out all the details I received a 502 Bad Gateway error.
I have set up a second Service to open port 443 (the default Slack webhook port) and exposed it with kubectl expose deployment --type=NodePort --port=443, and have also tried --type=LoadBalancer, with no luck.
I've also tried setting up a second Ingress pointing to the second Service, but then I run into readinessProbe issues.
Anyone had the same issue and if so how was it resolved?
Network Policy was enabled on the cluster and there were cluster-wide policies denying outgoing traffic. After setting up network policies for the pods in my own namespace I was able to connect without any issues.
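A minimal sketch of such a policy, assuming Grafana runs in a namespace called `monitoring` (the namespace is an assumption; a real policy would usually restrict egress further):

```yaml
# Allow all egress traffic from every pod in the monitoring namespace,
# overriding a cluster-wide default-deny egress policy.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress
  namespace: monitoring
spec:
  podSelector: {}   # empty selector matches all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - {}              # empty rule allows all outbound destinations
```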

How do we debug networking issues within istio pods?

I am working on setting up istio in my kubernetes cluster.
I downloaded istio-1.4.2, installed the demo profile, and did manual sidecar injection.
But when I check the sidecar pod logs, I get the error below:
2019-12-26T08:54:17.694727Z error k8s.io/client-go#v11.0.1-0.20190409021438-1a26190bd76a+incompatible/tools/cache/reflector.go:98: Failed to list *v1beta1.MutatingWebhookConfiguration: Get https://10.96.0.1:443/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations?fieldSelector=metadata.name%3Distio-sidecar-injector&limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: connection refused
It seems to be a networking issue, but could you please tell me what it is trying to do exactly?
Is there a way to get more detail than just 'connection refused'?
How do we verify networking issues between Istio pods? It seems I cannot run wget, curl, tcpdump, netstat, etc. within the Istio sidecar pod to debug further.
All the pods in kube-system namespace are working fine.
Check what port your API server serves HTTPS traffic on (controlled by the --secure-port flag, default 6443). It may be 6443 instead of 443.
Check the value of server in your kubeconfig, and whether you can connect to your Kubernetes cluster via kubectl using that kubeconfig.
Another thing to check is whether you have a network policy attached to the namespace that blocks egress traffic.
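The first two checks can be sketched with standard kubectl commands:

```shell
# Print the API server address kubectl is configured to use
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Verify connectivity to the API server with that kubeconfig
kubectl get --raw /healthz
```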
You could also use an ephemeral container to debug issues with the sidecar:
https://kubernetes.io/docs/concepts/workloads/pods/ephemeral-containers/
https://github.com/aylei/kubectl-debug
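If your cluster and kubectl support ephemeral containers, a debug session might look like this (the pod name is hypothetical; nicolaka/netshoot is one commonly used tooling image):

```shell
# Attach an ephemeral container with network tools, sharing the
# process/network namespace of the istio-proxy sidecar
kubectl debug -it mypod --image=nicolaka/netshoot --target=istio-proxy
# Inside the debug container you can then run, for example:
#   curl -kv https://10.96.0.1:443/
#   netstat -tlnp
```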

Access non Istio resource

My current version of Istio is 0.2.12.
I have a deployment that is injected with istio kube-inject and tries to connect to a service/deployment inside the Kubernetes cluster that does not use Istio. How can I allow access from the Istio-injected deployment to the non-Istio deployment?
In this case the Istio-injected deployment is a Spring Boot application and the other is an ephemeral MySQL server.
Any ideas?
You should be able to access all Kubernetes services (Istio-injected and regular ones) from Istio-injected pods.
This is now possible; see the question
"Can I enable Istio Auth with some services while disable others in the same cluster?"
in the security section of the FAQ: https://istio.io/help/faq.html