Directing HTTP traffic through an external proxy using Istio - Kubernetes

We are running a number of microservices in an Istio-enabled Kubernetes cluster. One of the microservices makes a call to an external service outside of the cluster, and I need to route that particular call through the company proxy, which also runs outside the cluster.
To explain a bit more: if I set HTTP_PROXY in the container and make a curl call to http://external.com, the call succeeds because it is routed through the proxy, but I want Istio to do this routing through the proxy transparently.
E.g. when I run curl http://external.com from within the container, Istio should automatically route the HTTP call via the company proxy and return the response.
I have created ServiceEntries for both external.com and proxy.com to make the call successful.
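For illustration, a ServiceEntry for the external host along these lines (a simplified sketch; the port and resolution settings here are illustrative, not the exact config used):
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: external-com   # illustrative sketch; the real ServiceEntry may differ
spec:
  hosts:
  - external.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL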

If I understood right, what you are looking for is an Egress Gateway.
Here is part of the tutorial for configuring an external HTTPS proxy from the Istio documentation:
Configure traffic to external HTTPS proxy
Define a TCP (not HTTP!) Service Entry for the HTTPS proxy. Although applications use the HTTP CONNECT method to establish connections with HTTPS proxies, you must configure the proxy for TCP traffic, instead of HTTP. Once the connection is established, the proxy simply acts as a TCP tunnel.
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: proxy
spec:
  hosts:
  - my-company-proxy.com # ignored
  addresses:
  - $PROXY_IP/32
  ports:
  - number: $PROXY_PORT
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
EOF
Send a request from the sleep pod in the default namespace. Because the sleep pod has a sidecar, Istio controls its traffic.
$ kubectl exec -it $SOURCE_POD -c sleep -- sh -c "HTTPS_PROXY=$PROXY_IP:$PROXY_PORT curl https://en.wikipedia.org/wiki/Main_Page" | grep -o "<title>.*</title>"
<title>Wikipedia, the free encyclopedia</title>
Check the Istio sidecar proxy’s logs for your request:
$ kubectl logs $SOURCE_POD -c istio-proxy
[2018-12-07T10:38:02.841Z] "- - -" 0 - 702 87599 92 - "-" "-" "-" "-" "172.30.109.95:3128" outbound|3128||my-company-proxy.com 172.30.230.52:44478 172.30.109.95:3128 172.30.230.52:44476 -
Check the access log of the proxy for your request:
$ kubectl exec -it $(kubectl get pod -n external -l app=squid -o jsonpath={.items..metadata.name}) -n external -- tail -f /var/log/squid/access.log
1544160065.248 228 172.30.109.89 TCP_TUNNEL/200 87633 CONNECT en.wikipedia.org:443 - HIER_DIRECT/91.198.174.192 -
Check out the whole tutorial, as it covers the setup requirements and also has steps to simulate an external proxy, so you can compare whether it is working as intended.
istio.io/docs/tasks/traffic-management/egress/http-proxy/

Related

Why sessionAffinity doesn't work on a headless service

I have the following headless service in my Kubernetes cluster:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: foobar
  name: foobar
spec:
  clusterIP: None
  clusterIPs:
  - None
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: foobar
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
Behind it are a couple of pods managed by a StatefulSet.
Let's try to reach my pods individually.
Running an alpine pod to contact my pods:
> kubectl run alpine -it --tty --image=alpine -- sh
Adding curl to fetch the webpage:
alpine#> apk add curl
I can curl each of my pods:
alpine#> curl -s pod-1.foobar
hello from pod 1
alpine#> curl -s pod-2.foobar
hello from pod 2
It works just as expected.
Now I want to have a service that will load balance between my pods.
Let's try to use that same foobar service:
alpine#> curl -s foobar
hello from pod 1
alpine#> curl -s foobar
hello from pod 2
It works just as well. Or at least almost: in my headless service I have specified sessionAffinity, so as soon as I curl a pod, I should stick to it.
I've tried the exact same test with a normal service (not headless) and this time it works as expected. It load balances between pods at first, BUT then sticks to the same pod afterwards.
Why doesn't sessionAffinity work on a headless service?
The affinity capability is provided by kube-proxy; only connections established through the proxy can have the client IP "stick" to a particular pod for a period of time. In the headless case, your client is given a list of pod IPs and it is up to your client application to select which IP to connect to. Because the order of the IPs in the list is not always the same, a typical application that always picks the first IP ends up connecting to a backend pod more or less at random.
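You can see this from the client side: resolving a headless service returns one A record per ready pod, in no guaranteed order (the output below is illustrative; actual addresses and the namespace depend on your cluster):
alpine#> nslookup foobar
Name:      foobar
Address 1: 10.42.0.11 pod-1.foobar.default.svc.cluster.local
Address 2: 10.42.0.12 pod-2.foobar.default.svc.cluster.local
A regular (non-headless) ClusterIP service instead resolves to a single virtual IP handled by kube-proxy, which is where the ClientIP affinity is actually enforced.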

Enable SSL connection for Kubernetes Dashboard

I use these commands to install and enable the Kubernetes Dashboard on a remote host:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
kubectl proxy --address='192.168.1.132' --port=8001 --accept-hosts='^*$'
http://192.168.1.132:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
But I get:
Insecure access detected. Sign in will not be available. Access Dashboard securely over HTTPS or using localhost. Read more here.
Is it possible to enable SSL connection on the Kubernetes host so that I can access it without this warning message and enable login?
From the service definition:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
This exposes port 443 (i.e., HTTPS), so it's already preconfigured. First, use https instead of http in your URL.
Then, instead of doing a kubectl proxy, why not simply
kubectl port-forward -n kubernetes-dashboard services/kubernetes-dashboard 8001:443
Access the endpoint at https://127.0.0.1:8001/#/login
Now it's going to give the typical "certificate not signed" warning, since the certificates are self-signed (the --auto-generate-certificates arg in the Deployment definition). Just skip it in your browser. See an article like https://vmwire.com/2022/02/07/running-kubernetes-dashboard-with-signed-certificates/ if you need to configure a signed certificate.
Try this:
First do a port-forward to your computer. This forwards port 8443 on your computer (the first port) to port 8443 in the pod (the one that is exposed according to the manifest):
kubectl port-forward pod/kubernetes-dashboard 8443:8443 # Make sure you switched to the proper namespace
In your browser, go to http://localhost:8443; based on the error message, it should work.
If the Kubernetes Dashboard pod serves SSL from its web server, go to https://localhost:8443 instead.
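Once you are on HTTPS you still need a bearer token to sign in. A minimal sketch, assuming you are fine creating an admin ServiceAccount for a test cluster (the account name here is illustrative, not part of the original setup):
# Create a ServiceAccount and bind it to cluster-admin (test clusters only)
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin
# On Kubernetes 1.24+, request a short-lived token and paste it into the login screen
kubectl -n kubernetes-dashboard create token dashboard-admin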

Kubernetes API Gateway for Microservice deployment

I am trying to understand the Kubernetes API gateway pattern for my microservices. I have multiple microservices, and those are deployed with Kubernetes Deployments along with their own Services.
I also have a front-end application that basically tries to communicate with the above APIs to complete its requests.
Overall, below is what I would like to achieve, and I would like your opinions.
Is my understanding correct with the below diagram? (i.e. should we have an API gateway on top of all my microservices, and should the web application use this API gateway to reach any of those services?)
If yes, how can I make that possible? I mean, I tried the Istio Gateway and it's somehow not working.
Here are my Istio Gateway and VirtualService.
On the other side, below is my service (catalog service) configuration:
apiVersion: v1
kind: Service
metadata:
  name: catalog-api-service
  namespace: local-shoppingcart-v1
  labels:
    version: "1.0.0"
spec:
  type: NodePort
  selector:
    app: catalog-api
  ports:
  - nodePort: 30001
    port: 30001
    targetPort: http
    protocol: TCP
    name: http-catalogapi
Also, in the hosts file (Windows: drivers\etc\hosts) I have entries for local DNS:
127.0.0.1 kubernetes.docker.internal
127.0.0.1 localshoppingcart.com
On the Istio service side, see the following screenshot.
I am not sure what is going wrong, but whether I try localhost:30139/catalog or localhost/catalog, I always get a connection refused or connection not found error.
If you are on minikube, you have to get the IP of minikube and the ingress ports using these commands, as mentioned in the documentation.
Get the IP of minikube:
export INGRESS_HOST=$(minikube ip)
Get the HTTP and HTTPS NodePort details:
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
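With those variables set, a request to the catalog route from the question would look roughly like this (assuming the Gateway and VirtualService actually route /catalog, which isn't shown in the question):
curl -v http://$INGRESS_HOST:$INGRESS_PORT/catalog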
If you are on Docker Desktop, try forwarding traffic using kubectl:
kubectl port-forward svc/istio-ingressgateway 8080:80 -n istio-system
Then open localhost:8080 in the browser.
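The Gateway and VirtualService from the question aren't included as text, so for comparison here is a rough sketch of a pair that would route /catalog to the catalog service (host, names, and ports are assumptions based on the question, not a verified configuration):
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: shoppingcart-gateway       # illustrative name
  namespace: local-shoppingcart-v1
spec:
  selector:
    istio: ingressgateway          # default Istio ingress gateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "localshoppingcart.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: catalog-vs                 # illustrative name
  namespace: local-shoppingcart-v1
spec:
  hosts:
  - "localshoppingcart.com"
  gateways:
  - shoppingcart-gateway
  http:
  - match:
    - uri:
        prefix: /catalog
    route:
    - destination:
        host: catalog-api-service
        port:
          number: 30001            # the Service port from the question
With something like this in place, requests go to the ingress gateway's host/port (from the steps above), not to the Service's NodePort directly.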

Forward traffic from Kubernetes pod to local server

How can I forward traffic from a remote Kubernetes pod to a server running on my local machine? The intention is to test a remote server making calls to a service running on my local machine. I've used kubectl port-forward to forward traffic from my local service to remote service, but I need to do this the other way around.
This solution is for minikube, which is used to run Kubernetes locally, so this probably doesn't apply to me.
There's probably a "more K8s" way, but here's an idea in case you don't find anything better: use SSH.
Specifically, set up and expose SSH on the pod so it's accessible from your local machine. Then just use ssh on your machine to create a remote SSH tunnel.
For instance, ssh -R 8080:localhost:80 <exposed-pod-ssh> will forward the pod's localhost:8080 to port 80 on your local machine.
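A minimal sketch of that idea, assuming a pod that already runs sshd on port 22 (pod name, user, and ports are illustrative):
# Expose the pod's sshd on a local port so the tunnel can be opened from your machine
kubectl port-forward pod/ssh-pod 2222:22
# In another terminal: after this, the pod's localhost:8080 reaches the server on your machine's port 80
ssh -p 2222 -R 8080:localhost:80 user@127.0.0.1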
This is a very annoying task.
With nothing special at hand, you need to redirect a port from your modem/router to your machine.
Most people have non-static public IPs, so you need to find your public IP address (you can search for "what is my ip") and hope that it doesn't change between your tests.
With your public IP address you can try to access it directly from the pod, e.g.:
kubectl exec -ti alpine-pod -- curl 150.136.143.228:8080
Some clusters don't let pods go outside the cluster on their own, so it's better to create a Service without selectors and then add an Endpoints object pointing to your IP:
apiVersion: v1
kind: Service
metadata:
  name: to-my-home
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Then, add the endpoint:
apiVersion: v1
kind: Endpoints
metadata:
  name: to-my-home
subsets:
- addresses:
  - ip: 150.136.143.228
  ports:
  - port: 8080
Now, if you're in the same namespace as that Service, you can do this:
kubectl exec -ti alpine-pod -- curl to-my-home
A better and easier solution is to use Vagrant Share, https://www.vagrantup.com/docs/share, but it needs a few steps:
Install a hypervisor (VirtualBox, Libvirt, Hyper-V, VMware)
Install Vagrant
Create an account on https://app.vagrantup.com
Start a machine and use vagrant share
Grab the URL shown in the command output and use it in the pod/service; the port is always 80
Example:
vagrant init debian/buster64
vagrant login
vagrant up
vagrant share
You could use a mesh VPN for this, something like https://tailscale.com/

Connection Refused between Kubernetes pods in the same cluster

I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.
Currently, the service running has multiple pods that need to communicate with each other. I'm looking for a general approach to debugging the issue, rather than getting into the specifics of the service, as the question would become much too specific.
The pods within the cluster are throwing an error:
err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"
Both pods are in the same cluster.
What are the best steps to take to debug this?
I have tried running:
kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf
And this returns:
search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
Which I found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
First of all, the following pattern:
my-svc.my-namespace.svc.cluster-domain.example
is applicable only to FQDNs of Services, not Pods, which have the following form:
pod-ip-address.my-namespace.pod.cluster-domain.example
e.g.:
172-17-0-3.default.pod.cluster.local
So in fact you're querying the cluster DNS for the FQDN of the Service named testpod, not for the FQDN of a Pod. Judging by the fact that it resolves successfully, such a Service already exists in your cluster, but it is most probably misconfigured. The fact that you're getting the error message connection refused can mean the following:
your Service FQDN testpod.mynamespace.svc.cluster.local has been successfully resolved
(otherwise you would receive something like curl: (6) Could not resolve host: testpod.default.svc.cluster.local)
you've successfully reached your testpod Service
(otherwise, i.e. if it existed but wasn't listening on the 8080 port you're trying to connect to, you would receive a timeout, e.g. curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out)
you've reached the Pod exposed by the testpod Service (you've been successfully redirected to it by the Service)
but once you reached the Pod, you're trying to connect to an incorrect port, and that's why the connection is being refused by the server
My best guess is that your Pod in fact listens on a different port, like 80, but you exposed it via the ClusterIP Service by specifying only the --port value, e.g. by:
kubectl expose pod testpod --port=8080
In such a case, both --port (the port of the Service) and --targetPort (the port of the Pod) get the same value. In other words, you've created a Service like the one below:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
And you probably should've exposed it either this way:
kubectl expose pod testpod --port=8080 --targetPort=80
or with the following yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Of course your targetPort may be different than 80, but connection refused in such a case can mean only one thing: the target HTTP server (running in the Pod) refuses the connection on port 8080, most probably because it isn't listening on it. You didn't specify what image you're using, whether it's a standard nginx webserver or something based on your custom image. But if it's nginx and wasn't configured differently, it listens on port 80.
For further debug, you can attach to your Pod:
kubectl exec -it testpod --namespace mynamespace -- /bin/sh
and if the netstat command is not present (the most likely scenario), run:
apt update && apt install net-tools
and then check with netstat -ntlp which port your container listens on.
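If the image is too minimal even for that, the same mismatch can usually be spotted from outside the Pod (the names below are the ones from the question):
kubectl describe svc testpod -n mynamespace      # compare Port and TargetPort
kubectl get endpoints testpod -n mynamespace     # shows the pod IP:port the Service actually forwards to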
I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.