Istio outbound HTTPS traffic troubleshooting - Kubernetes

I have 2 AKS clusters, Cluster 1 and Cluster 2, both running Istio 1.14 (minimal profile) out of the box with default configs.
Everything on Cluster 1 works as expected (after deploying Istio).
On Cluster 2, all HTTPS outbound connections initiated from my services (injected with istio-proxy) fail.
curl http://www.google.com #works
curl https://www.google.com #fails
If I create a ServiceEntry for Google, then the HTTPS curl works:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: google
spec:
  hosts:
  - www.google.com
  ports:
  - number: 443
    name: https
    protocol: HTTPS
  resolution: DNS
  location: MESH_EXTERNAL
Both Istio installations are out of the box, so meshConfig.outboundTrafficPolicy.mode is set to ALLOW_ANY (double-checked).
I read online that there were some Istio bugs that could cause this behavior, but I don't think that's the case here. I also compared the Istio configs between the two clusters and they really seem to be the same.
I'm starting to think the problem may lie in some cluster configs, because I know there are some differences between the two clusters.
How would you go about troubleshooting this? Do you think the issue is related to Istio or Cluster services/configs? What should I look into first?

You are correct: ALLOW_ANY is the default value for meshConfig.outboundTrafficPolicy.mode. This can be verified in the cluster by running the command below.
kubectl get configmap istio -n istio-system -o yaml | grep -o "mode: ALLOW_ANY"
Please also refer to the Istio documentation for the options available for accessing external services.
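Beyond confirming the mode, it can help to compare what each sidecar actually programs for port 443. A hedged sketch of first steps (istioctl must be installed, and <pod> is a placeholder for one of your injected pods; with ALLOW_ANY you would expect a PassthroughCluster entry):
# Inspect the sidecar's listeners for port 443 on both clusters
istioctl proxy-config listeners <pod> --port 443
# Confirm the passthrough cluster exists in the Envoy config
istioctl proxy-config clusters <pod> | grep -i passthrough
If the output differs between the two clusters, the problem is on the Istio side; if it is identical, cluster-level differences (egress firewall rules, NSGs, or a transparent proxy in front of Cluster 2) become the prime suspects.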


How to make Redis work with an mTLS-enabled Istio cluster?

Summary
I have a simple Istio-enabled k8s cluster consisting of only:
A Java web server.
A Redis master instance.
Normally, the web server can read and write from Redis. However, Kiali shows a disconnected graph similar to (https://kiali.io/documentation/latest/faq/#disconnected-tcp). As a result, I tried to explicitly turn on mTLS using STRICT mode, but Kiali continues to show a disconnected graph.
Set up:
Kubernetes version 1.18.0
Minikube version 1.18.0
Istio version 1.9
I followed Istio's Getting Started page to install Istio.
$ istioctl install --set profile=demo -y
$ kubectl apply -f samples/addons
Java Server code snippet (redis.clients.jedis.Jedis)
Jedis redis = new Jedis("redis-master");
redis.set(key, value);
mTLS
apiVersion: "security.istio.io/v1beta1"
kind: "PeerAuthentication"
metadata:
name: "default"
spec:
mtls:
mode: STRICT
Questions
My understanding is that mTLS should be turned on by default. Is this not the case for non-HTTP TCP traffic?
Is there anything special I need to do to enable mTLS for non-HTTP TCP traffic? (e.g. change the port on the Service to 443 from 6379? Set up a VirtualService?).
According to the Istio documentation, you have to configure Redis to make it work with Istio.
Similar to other services deployed in an Istio service mesh, Redis instances need to listen on 0.0.0.0. However, each Redis slave instance should announce an address that can be used by master to reach it, which cannot also be 0.0.0.0.
Use the Redis configuration parameter replica-announce-ip to announce the correct address. For example, set replica-announce-ip to the IP address of each Redis slave instance using these steps:
Pass the pod IP address through an environment variable in the env subsection of the slave StatefulSet definition:
- name: "POD_IP"
valueFrom:
fieldRef:
fieldPath: status.podIP
Also, add the following under the command subsection:
echo "" >> /opt/bitnami/redis/etc/replica.conf
echo "replica-announce-ip $POD_IP" >> /opt/bitnami/redis/etc/replica.conf

No ExternalIP showing in kubernetes nodes?

I am running
kubectl get nodes -o yaml | grep ExternalIP -C 1
But I am not finding any ExternalIP. There are various comments online about problems with non-cloud setups.
I am following this doc https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
with microk8s on a desktop.
If you set up a k8s cluster on a cloud provider, Kubernetes will auto-detect the ExternalIP for you; it will be a load balancer IP address. But if you set it up on-premises or on your desktop, you have to provide the external IP yourself by deploying a load balancer such as MetalLB.
In short:
From my answer Kubernetes Ingress nginx on Minikube fails.
By default, solutions like minikube do not provide you with a
LoadBalancer. Cloud solutions like EKS, Google Cloud, and Azure do it for
you automatically by spinning up a separate LB in the background. That's
why you see the Pending status.
In your case, the right move is most probably to look into the MicroK8s add-ons. There is an add-on for MetalLB.
Thanks to #Matt for his MetalLB external load balancer on docker-desktop community edition on Windows 10 single-node Kubernetes Infrastructure answer and the researched info.
MetalLB Loadbalancer is a network LB implementation that tries to
“just work” on bare metal clusters.
When you enable this add-on you will be asked for an IP address pool
that MetalLB will hand out IPs from:
microk8s enable metallb
For load balancing in a MicroK8s cluster, MetalLB can make use of
Ingress to properly balance across the cluster (make sure you have
also enabled ingress in MicroK8s first, with microk8s enable ingress).
To do this, it requires a service. A suitable ingress service is
defined here:
apiVersion: v1
kind: Service
metadata:
  name: ingress
  namespace: ingress
spec:
  selector:
    name: nginx-ingress-microk8s
  type: LoadBalancer
  # loadBalancerIP is optional. MetalLB will automatically allocate an IP
  # from its pool if not specified. You can also specify one manually.
  # loadBalancerIP: x.y.z.a
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
You can save this file as ingress-service.yaml and then apply it with:
microk8s kubectl apply -f ingress-service.yaml
Now there is a load-balancer which listens on an arbitrary IP and
directs traffic towards one of the listening ingress controllers.
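To verify that MetalLB actually handed out an address, check that the service's EXTERNAL-IP is populated from the pool (the values shown are illustrative):
microk8s kubectl get service ingress -n ingress
# NAME      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
# ingress   LoadBalancer   10.152.183.57   10.64.140.43   80:31756/TCP,443:30285/TCP   2m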

Ambassador Edge Stack Questions

I'm getting a "no healthy upstream" error when accessing Ambassador. The pods, services, and load balancer all seem fine and healthy. Ambassador is running on top of AKS.
At the moment I have multiple services running in the Kubernetes cluster, and each service has its own Mapping with its own prefix. Is it possible to point multiple k8s services to the same Mapping so that I don't have too many prefixes, and all my k8s services sit under the same Ambassador prefix?
By default Ambassador is taking me through HTTPS, which is creating certificate issues. Although I will be bringing in HTTPS in the near future, for now I'm just trying to prove the concept, so how can I disable HTTPS and run Ambassador over HTTP only?
"No healthy upstream" typically means that, for whatever reason, Ambassador cannot find the service listed in the Mapping. The first thing I usually do when I see this is to run kubectl exec -it -n ambassador {my_ambassador_pod_name} -- sh and try curl -v my-service, where "my-service" is the Kube DNS name of the service you are trying to hit. Depending on the response, it can give you some hints on why Ambassador is failing to see the service.
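For example (the pod and service names below are placeholders for your own):
kubectl exec -it -n ambassador ambassador-6d9f8b7c4-xk2lp -- sh
# from inside the Ambassador pod:
curl -v http://my-service.my-namespace:8080/
A "connection refused" or DNS failure here points at the service itself rather than at Ambassador.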
Mappings work on a 1-1 basis with services. If your goal, however, is to avoid prefix usage, there are other ways Ambassador can match to create routes. One common way I've seen is to use host-based routing (https://www.getambassador.io/docs/latest/topics/using/headers/host/) and create subdomains for either individual or logical sets of services.
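A minimal sketch of such a host-based Mapping (the hostname and service here are assumptions to adapt):
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: my-service-mapping
  namespace: ambassador
spec:
  host: my-service.example.com  # match on the Host header instead of a longer path prefix
  prefix: /
  service: my-service.my-namespace:8080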
AES defaults to redirecting to HTTPS, but this behavior can be overridden by applying a Host with insecure routing behavior. A very simple one that I commonly use is this:
---
apiVersion: getambassador.io/v2
kind: Host
metadata:
  name: wildcard
  namespace: ambassador
spec:
  hostname: "*"
  acmeProvider:
    authority: none
  requestPolicy:
    insecure:
      action: Route
  selector:
    matchLabels:
      hostname: wildcard
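After applying this Host, a plain-HTTP request against Ambassador should be routed instead of redirected, which you can spot-check (the address and prefix are placeholders):
curl -v http://<ambassador-external-ip>/my-prefix/
# expect your service's response rather than a 301 redirect to https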

Istio ingress not working with headless service

I have deployed Kafka as a StatefulSet with ZooKeeper configured as leader selector, plus a headless service. Kafka is running absolutely fine as expected. However, I am facing issues while configuring Istio to access Kafka.
$ kubectl get pods -owide | grep -i kafka
kafka-mon-0 1/1 Running 0 3d1h <IP>
$ kubectl get svc -owide | grep -i kafka
kafka-mon-http LoadBalancer <IP> <Ext-IP> 8080:30875/TCP app=kafka-mon
kafka-mon-svc ClusterIP None <none> 8080/TCP app=kafka-mon
If I configure Istio with the Kafka LoadBalancer service, I am able to access the UI. However, if I use the headless service, the UI itself is not accessible. I have tested with different other services as well; the result is the same.
$ kubectl get gateway,virtualservice | grep -i kafka
gateway.networking.istio.io/kafka-mon-gateway 4h
virtualservice.networking.istio.io/kafka-mon-vservice 4h
Istio works perfectly if the VirtualService is configured with the LoadBalancer service, but not with the headless service. Please help me figure out the issue.
For Istio, I have deployed a Gateway as internal-ingressgateway with HTTP port 80 and HTTPS port 443, and a VirtualService with the routing destination host set to the Kafka headless service. It doesn't work, but it does work if the routing destination host is configured as the LoadBalancer service.
I am not able to troubleshoot the issue. Please suggest.
I had this issue and I fixed it by adding a ServiceEntry. When we use a headless svc, Istio is not sure where to direct the traffic. You can add something similar to the below.
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: kafka-se
  namespace: <If any namespace>
spec:
  hosts:
  - kafka.default.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - name: grpc
    number: 5445
    protocol: TCP
  resolution: DNS
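With the ServiceEntry registered, the VirtualService can keep pointing at the headless hostname. A sketch adapted to the names in the question (the gateway, host, and port are assumptions to adjust):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: kafka-mon-vservice
spec:
  hosts:
  - "*"
  gateways:
  - kafka-mon-gateway
  http:
  - route:
    - destination:
        host: kafka-mon-svc.default.svc.cluster.local  # the headless svc covered by a matching ServiceEntry
        port:
          number: 8080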

How to set up the Kubernetes dashboard with an ingress controller

I am very new to the idea of Kubernetes.
I found some good tutorials online to get my Kubernetes cluster up and running.
Right now I want to add the Kubernetes dashboard to my cluster so that I have a page where I can watch how my pods and nodes react (even though I'm more of a CLI guy, some GUI is not bad).
I've deployed the dashboard pod and it's up and running. Because the Kubernetes cluster is running on a Raspberry Pi cluster, I've set up a NodePort to access it from outside my cluster. But I've run into some problems I can't find anything about online.
On my Linux host machine I can access the Kubernetes dashboard, but somehow my browsers won't add the cert exception.
Some people online are convinced that NodePort is not safe enough, so I've done some research on other possibilities. I am very interested in using an ingress controller to expose my dashboard, but I didn't find any good and complete documentation on how to set up an ingress controller (and more importantly what is happening, because there are a lot of YAML files online that say "just run this" with no clue what they are doing).
Can someone direct me to the right documentation/tutorial or give me any help with my Kube dashboard?
Someone in another forum sent me this tutorial, which was very helpful. I'll share it here for all the people who've come to this post with the same question.
https://akomljen.com/kubernetes-nginx-ingress-controller/
The first thing you will need to use in the deployment is Ingress, so let's start with it.
First, you should create an Ingress controller; you can find the Installation Guide here.
The most relevant is the first part, Generic Deployment, which includes the following:
Namespace for the Ingress controller installation:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/namespace.yaml | kubectl apply -f -
Default backend for Ingress controller:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/default-backend.yaml | kubectl apply -f -
And config maps:
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/tcp-services-configmap.yaml | kubectl apply -f -
curl https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/udp-services-configmap.yaml | kubectl apply -f -
Because you deployed your cluster on Raspberry Pi, all of this needs to be created manually.
After the Ingress controller is installed, you can deploy a specific configuration for your Ingress with rules to route traffic to your service.
Here is an example of an Ingress YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: endpoint-to-the-world
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: your-external-address-for-the-cluster
    http:
      paths:
      - path: /console
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
      - path: /some-other-path
        backend:
          serviceName: different-service
          servicePort: 22
This will act like an external proxy for your cluster and you can route all traffic to any service. More details can be read here.
This should be enough to have Kubernetes Dashboard exposed.
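Once applied, a quick way to confirm the rule was picked up (host and path as in the example above; if the dashboard insists on serving TLS behind the proxy, the ingress-nginx annotation nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" may also be needed):
kubectl get ingress endpoint-to-the-world
curl -kL http://your-external-address-for-the-cluster/console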