Open ports other than HTTP & HTTPS in Traefik Kubernetes Ingress - kubernetes

I've set up Traefik as an Ingress controller in Kubernetes with this configuration: https://github.com/RedxLus/traefik-simple-kubernetes/tree/master/V1.7
It works well for HTTP and HTTPS, but I don't know how to open other ports for forwarding, for example, for a Pod with an Ingress serving MySQL on port 3306.
Thanks for every answer!

Traefik doesn't support this if you are using an Ingress resource, because that resource doesn't cover L4 traffic, as mentioned in the other answer.
But if you are using the Nginx ingress controller, there is a workaround: use a ConfigMap together with the ingress controller options --tcp-services-configmap and --udp-services-configmap as described here. Your tcp-services ConfigMap would then look something like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  9000: "default/example-go:8080"
The advantage of this is having a single entry point to your cluster (this applies to any ingress that would be used for TCP/UDP), but the downside is the overhead of an extra layer compared to simply having a Kubernetes Service (NodePort or LoadBalancer) that already listens on TCP/UDP ports.
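For comparison, the simpler alternative from the last sentence is a plain Service exposing the TCP port directly (a sketch; the names and the pod label are assumptions, not from the question):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: LoadBalancer  # or NodePort
  selector:
    app: mysql        # assumed pod label
  ports:
  - name: mysql
    port: 3306
    targetPort: 3306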

The Kubernetes Ingress API does not support it. But it is possible to use Traefik as a TCP proxy for your desired use case, though only if you make use of TLS-encrypted connections. Otherwise, at the layer 4 protocol level, it's not possible to distinguish between different hostnames, and you would have to use one entrypoint per TCP router. Check this issue on GitHub.
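For illustration, if you can move to Traefik v2, TCP routing is expressed with its IngressRouteTCP CRD. A minimal sketch, assuming a TCP entrypoint named mysql was declared in Traefik's static configuration (e.g. --entrypoints.mysql.address=:3306) and that the hostname and Service name below are placeholders:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-router
spec:
  entryPoints:
  - mysql                              # TCP entrypoint from the static config
  routes:
  - match: HostSNI(`db.example.com`)   # SNI-based routing only works over TLS
    services:
    - name: mysql                      # assumed Kubernetes Service
      port: 3306
  tls: {}                              # terminate TLS at Traefik
Without TLS, the only possible matcher is HostSNI(`*`), which is why you would otherwise need one entrypoint per TCP router.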

Related

Istio Virtual Service Relationship to Normal Kubernetes Service

I am watching a Pluralsight video on the Istio service mesh. One part of the presentation says this:
The VirtualService uses the Kubernetes service to find the IP addresses of all the pods. The VirtualService doesn't route any traffic through the [Kubernetes] service, but it just uses it to get the list of endpoints where the traffic could go.
And it shows a graphic illustrating the pod discovery (not the traffic routing).
I am a bit confused by this because I don't know how an Istio VirtualService knows which Kubernetes Service to look at. I don't see any reference in the example Istio VirtualService yaml files to a Kubernetes Service.
I have theorized that the DestinationRules could have enough labels on them to get down to just the needed pods, but the examples only use the labels v1 and v2. It seems unlikely that a version alone will give only the needed pods. (Many different Services could be on v1 or v2.)
How does an Istio VirtualService know which Kubernetes Service to associate to?
or said another way,
How does an Istio VirtualService know how to find the correct pods from all the pods in the cluster?
When creating a VirtualService you define which Service to route to in the route.destination section:
port: the port the Service is listening on
host: the name of the Service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test
spec:
  hosts:
  - "example.com"
  gateways:
  - test-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: app-service
So:
app-pod/s -> (managed by) app-service -> test VirtualService
Arfat's answer is correct.
I want to add the following part from the docs about the host field, which should make things even clearer.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService
[...] Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
So when you write host: app-service and the VirtualService is in the default namespace, the host is interpreted as app-service.default.svc.cluster.local, which is the FQDN of the kubernetes service. If the app-service is in another namespace, say dev, you need to set the host as host: app-service.dev.svc.cluster.local.
Same goes for DestinationRule, where the FQDN of a kubernetes service is defined as host, as well.
https://istio.io/latest/docs/reference/config/networking/destination-rule/#DestinationRule
VirtualService and DestinationRule are both configured for a host. The VirtualService defines where the traffic should go (e.g. host, weights for different versions, ...), and the DestinationRule defines how the traffic should be handled (e.g. the load-balancing algorithm, and how the versions are defined).
So traffic is not routed like
Gateway -> VirtualService -> DestinationRule -> Service -> Pod, but rather like
Gateway -> Service, applying the config from VirtualService and DestinationRule.
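To make the relationship concrete, a minimal DestinationRule for the app-service above might look like this (a sketch; the default namespace and the version labels on the pods are assumptions):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: app-destination-rule
spec:
  host: app-service.default.svc.cluster.local  # FQDN of the Kubernetes Service
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN                      # how the traffic is handled
  subsets:
  - name: v1
    labels:
      version: v1                              # selects pods labeled version=v1
  - name: v2
    labels:
      version: v2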

Can Ingress Controllers use Selector based rules?

I have deployed a statefulset in AKS - My goal is to load balance traffic to my statefulset.
From my understanding I can define a LoadBalancer Service that can route traffic based on selectors, something like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
However, I don't necessarily want to go down the LoadBalancer route; I would prefer Ingress doing this work for me. My question is: can any ingress controller support routing rules which do path-based routing to endpoints based on selectors, instead of routing to another Service?
Update
To elaborate more on the scenario: each pod in my statefulset is a stateless node doing data processing of an HTTP feed. I want my ingress service to be able to load-balance traffic across these statefulset pods (honoring keep-alives etc.); however, given the nature of statefulsets in k8s, they are currently exposed through a headless service. I am not sure whether a headless service can load-balance traffic to my statefulsets.
Update 2
A quick search reveals that a headless service does not load-balance:
Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed "headless" Services, by explicitly specifying "None" for the cluster IP (.spec.clusterIP).
As far as I know, it's not possible to do selector-based routing with Ingress.
Selector-based routing is mostly used during a blue-green or canary deployment, and you can only achieve it by using a service mesh. You can use any service mesh, like Istio or App Mesh, to do selector-based routing.
I have deployed a statefulset in AKS - My goal is to load balance traffic to my statefulset.
If your goal is just to load-balance traffic, you can use an ingress controller, though I'm still not sure about the scenario you are trying to explain.
By default, a Kubernetes Service also load-balances traffic across the Pods.
The flow will be something like: DNS > ingress > ingress controller > Kubernetes Service (load balancing here) > any pod of the statefulset.
+1 to Harsh Manvar's answer but let me add also my 3 cents.
My question is: can any ingress controller support routing rules which can do path-based routing to endpoints based on selectors, instead of routing to another service?
To the best of my knowledge, the answer to your question is no, and this doesn't even depend on a particular ingress controller implementation. Note that various ingress controllers, no matter how different they may be when it comes to implementation, must conform to the general specification of the Ingress resource described in the official Kubernetes documentation. You don't have different kinds of Ingresses depending on which controller is used.
Ingress and Service work on different layers of abstraction. While a Service exposes a set of pods using a selector, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp 👈
path-based routing performed by an Ingress is always done between Services:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test 👈
            port:
              number: 80
I am not sure if a headless service can load balance traffic to my statefulsets?
The first answer is "no". Why?
A k8s Service is implemented by kube-proxy. Kube-proxy itself can work in two modes:
iptables (also known as netfilter)
ipvs (also known as LVS/Linux Virtual Server)
In iptables mode, load balancing is a NAT iptables rule mapping the ClusterIP address to the list of Endpoints.
In ipvs mode, load balancing is a VIP (an LVS Virtual IP) with the Endpoints as upstreams.
So, when you create a k8s Service with clusterIP set to None, you are effectively saying:
"I need this service WITHOUT load balancing"
Setting clusterIP to None causes kube-proxy NOT to create a NAT rule in iptables mode, or a VIP in ipvs mode. There will then be nothing to load-balance traffic across the pods selected by this particular Service selector.
The second answer is "it could be". Why?
You are free to create a headless Service with the desired pod selector. A DNS query for this Service will return a list of DNS A records for the selected pods. Then you can use this data to implement load balancing YOUR way.
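A minimal sketch of such a headless Service, assuming the app: nginx pod labels from the question:
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None    # headless: no VIP, so kube-proxy does no load balancing
  selector:
    app: nginx       # assumed pod label
  ports:
  - name: web
    port: 80
A DNS lookup for nginx-headless.<namespace>.svc.cluster.local then returns one A record per ready pod, and the client can balance across them itself.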

force http to https on GKE ingress cloud loadbalancer [duplicate]

Is there a way to force an SSL upgrade for incoming connections on the ingress load-balancer? Or, if that is not possible, can I disable port :80? I haven't found a good documentation page that outlines such an option in the YAML file. Thanks a lot in advance!
https://github.com/kubernetes/ingress-gce#frontend-https
You can block HTTP through the annotation kubernetes.io/ingress.allow-http: "false" or redirect HTTP to HTTPS by specifying a custom backend. Unfortunately GCE doesn't handle redirection or rewriting at the L7 layer directly for you, yet. (see https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https)
Update: GCP now handles redirection rules for load balancers, including HTTP to HTTPS. There doesn't appear to be a method to create these through Kubernetes YAML yet.
This was already correctly answered by a comment on the accepted answer, but since the comment is buried, I missed it several times.
As of GKE version 1.18.10-gke.600 you can add a k8s frontend config to redirect from http to https.
https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: ssl-redirect
spec:
  redirectToHttps:
    enabled: true
# add below to ingress
# metadata:
#   annotations:
#     networking.gke.io/v1beta1.FrontendConfig: ssl-redirect
The annotation has changed:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    kubernetes.io/ingress.allow-http: "false"
spec:
  ...
Here is the annotation change PR:
https://github.com/kubernetes/contrib/pull/1462/files
If you are not bound to the GCLB Ingress Controller, you could have a look at the Nginx Ingress Controller. This controller is different from the built-in one in multiple ways. First and foremost, you need to deploy and manage it yourself. But if you are willing to do so, you get the benefit of not depending on the GCE LB ($20/month) and of getting support for IPv6/websockets.
The documentation states:
By default the controller redirects (301) to HTTPS if TLS is enabled for that ingress. If you want to disable that behaviour globally, you can use ssl-redirect: "false" in the NGINX config map.
The recently released 0.9.0-beta.3 comes with an additional annotation for explicitly enforcing this redirect:
Force redirect to SSL using the annotation ingress.kubernetes.io/force-ssl-redirect
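A sketch of that annotation in use (the host, secret and Service names are placeholders, using the extensions/v1beta1 API of that era):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  annotations:
    ingress.kubernetes.io/force-ssl-redirect: "true"  # explicitly force the HTTP -> HTTPS redirect
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls    # assumed TLS secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app  # assumed backend Service
          servicePort: 80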
Google has responded to our requests and is testing HTTP->HTTPS SSL redirection on their load balancers. Their latest answer said it should be in Alpha sometime before the end of January 2020.
Their comment:
Thank you for your patience on this issue. The feature is currently in testing and we expect to enter Alpha phase before the end of January. Our PM team will have an announcement with more details as we get closer to the Alpha launch.
My fingers are crossed that we'll have a straightforward solution to this very common feature in the near future.
UPDATE (April 2020):
HTTP(S) redirects are now a Generally Available feature. It's still a bit rough around the edges and unfortunately does not work out of the box with the GCE Ingress Controller. But time will tell, and hopefully a native solution will appear.
A quick update: a FrontendConfig can now be created to configure the Ingress (see the GKE ingress features page linked above). Hope it helps.
Example:
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: my-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT  # 301; the field takes an enum name, not a numeric code
You'll need to make sure that your load balancer supports HTTP and HTTPS.
I worked on this for a long time. In case anyone isn't clear on the post above: you would rebuild your ingress with the annotation kubernetes.io/ingress.allow-http: "false", then delete your ingress and redeploy. The annotation will have the ingress only create a LB for 443, instead of both 443 and 80.
Then you create a Compute Engine HTTP LB, not one managed by GKE.
GUI directions:
Create a load balancer and choose HTTP(S) Load Balancing -> Start configuration.
Choose "From Internet to my VMs" and continue.
Choose a name for the LB.
Leave the backend configuration blank.
Under Host and path rules, select Advanced host and path rules with the action set to Redirect the client to different host/path.
Leave the Host redirect field blank.
Select Prefix Redirect and leave the Path value blank.
Choose the redirect response code 308.
Tick the Enable box for HTTPS redirect.
For the Frontend configuration, leave HTTP and port 80; for the IP address, select the static IP address being used for your GKE ingress.
Create this LB.
You will now have all HTTP traffic go to this LB and get a 308 redirect to your HTTPS ingress for GKE. A super simple setup that works well.
Note: if you just try to delete the port 80 LB that GKE makes (without doing the annotation change and rebuilding the ingress) and then add the new redirect compute LB, it does work, but you will start to see error messages on your Ingress saying error 400: invalid value for field 'resource.ipAddress', "" is in use and would result in a conflict. GKE is trying to spin up the port 80 LB and can't, because you already have an LB on port 80 using the same IP. It does work, but the error is annoying and GKE keeps trying to build it (I think).
Thanks to the comment of @Andrej Palicka and the page he provided, https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-features#https_redirect, I now have an updated and working solution.
First we need to define a FrontendConfig resource and then we need to tell the Ingress resource to use this FrontendConfig.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: myapp-prd
    networking.gke.io/managed-certificates: managed-cert
    kubernetes.io/ingress.class: "gce"
    networking.gke.io/v1beta1.FrontendConfig: myapp-frontend-config
spec:
  defaultBackend:
    service:
      name: myapp-app-service
      port:
        number: 80
---
apiVersion: networking.gke.io/v1beta1
kind: FrontendConfig
metadata:
  name: myapp-frontend-config
spec:
  redirectToHttps:
    enabled: true
    responseCodeName: MOVED_PERMANENTLY_DEFAULT
You can disable HTTP on your cluster (note that you'll need to recreate your cluster for this change to be applied on the load balancer) and then set up an HTTP-to-HTTPS redirect by creating an additional load balancer on the same IP address.
I spent a couple of hours on the same question and ended up doing what I've just described. It works perfectly.
Redirecting to HTTPS in Kubernetes is somewhat complicated. In my experience, you'll probably want to use an ingress controller such as Ambassador or ingress-nginx to control routing to your services, as opposed to having your load balancer route directly to your services.
Assuming you're using an ingress controller, then:
If you're terminating TLS at the external load balancer and the LB is running in L7 mode (i.e., HTTP/HTTPS), then your ingress controller needs to use X-Forwarded-Proto, and issue a redirect accordingly.
If you're terminating TLS at the external load balancer and the LB is running in TCP/L4 mode, then your ingress controller needs to use the PROXY protocol to do the redirect.
You can also terminate TLS directly in your ingress controller, in which case it has all the necessary information to do the redirect.
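For example, with ingress-nginx (one controller choice among several), the first two cases map to ConfigMap options roughly like this. This is a sketch: both keys exist in the ingress-nginx ConfigMap, but you would enable only the one matching your load balancer's mode:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller  # must match the ConfigMap name the controller watches
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"   # L7 LB case: trust X-Forwarded-Proto from the LB
  use-proxy-protocol: "true"      # TCP/L4 LB case: expect PROXY protocol on incoming connections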
Here's a tutorial on how to do this in Ambassador.

IP Blacklisting in Istio

The IP whitelisting/blacklisting example explained here https://kubernetes.io/docs/tutorials/services/source-ip/ uses the source.ip attribute. However, in Kubernetes (a cluster running on Docker for Desktop), source.ip returns the IP of kube-proxy. A suggested workaround is to use request.headers["X-Real-IP"]; however, it doesn't seem to work and returns the kube-proxy IP on Docker for Desktop on Mac.
https://github.com/istio/istio/issues/7328 mentions this issue and states:
With a proxy that terminates the client connection and opens a new connection to your nodes/endpoints. In such cases the source IP will always be that of the cloud LB, not that of the client.
With a packet forwarder, such that requests from the client sent to the loadbalancer VIP end up at the node with the source IP of the client, not an intermediate proxy.
Loadbalancers in the first category must use an agreed upon protocol between the loadbalancer and backend to communicate the true client IP such as the HTTP X-FORWARDED-FOR header, or the proxy protocol.
Can someone please help how can we define a protocol to get the client IP from the loadbalancer?
Maybe you are confusing kube-proxy and Istio: by default Kubernetes uses kube-proxy, but you can install Istio, which injects a new proxy per pod to control the traffic in both directions to the services inside the pod.
With that said, you can install Istio on your cluster, enable it for only the services you need, and apply blacklisting using the Istio mechanisms:
https://istio.io/docs/tasks/policy-enforcement/denial-and-list/
To make a blacklist using the source IP, we have to let Istio manage how the source IP address is fetched, and use some configuration like this, taken from the docs:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.57.0.0/16"]  # overrides provide a static list
    blacklist: false
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
  compiledTemplate: listentry
  params:
    value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
spec:
  match: source.labels["istio"] == "ingressgateway"
  actions:
  - handler: whitelistip
    instances: [ sourceip ]
---
You can use the param providerUrl to maintain an external list.
Also check using externalTrafficPolicy: Local on the ingress-gateway Service of Istio.
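For reference, the relevant fields on the ingress gateway Service would look roughly like this (a sketch with only the essential fields; the port mapping is an assumption about your particular install):
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # preserve the client source IP instead of SNAT-ing through another node
  selector:
    istio: ingressgateway
  ports:
  - name: https
    port: 443                   # assumed port mapping
    targetPort: 8443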
As per the comments, my last advice is to use a different ingress controller to avoid the use of kube-proxy; my recommendation is the nginx controller:
https://github.com/kubernetes/ingress-nginx
You can configure this ingress controller as a regular nginx acting as a proxy.

Preserving remote client IP with Ingress

My goal is to make my web application (deployed on a Kubernetes 1.4 cluster) see the IP of the client that originally made the HTTP request. As I'm planning to run the application on a bare-metal cluster, GCE and the service.alpha.kubernetes.io/external-traffic: OnlyLocal service annotation introduced in 1.4 are not applicable for me.
Looking for alternatives, I've found this question, which proposes setting up an Ingress to achieve my goal. So, I've set up the Ingress and the NginX Ingress Controller. The deployment went smoothly and I was able to connect to my web app via the Ingress address and port 80. However, in the logs I still see a cluster-internal IP (from the 172.16.0.0/16 range), which means that the external client IPs are not being properly passed via the Ingress. Could you please tell me what I need to configure in addition to the above to make it work?
My Ingress' config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myWebApp
spec:
  backend:
    serviceName: myWebApp
    servicePort: 8080
As a layer 4 proxy, Nginx cannot retain the original source IP address in the actual IP packets. You can work around this using the Proxy protocol (the link points to the HAProxy documentation, but Nginx also supports it).
For this to work however, the upstream server (meaning the myWebApp service in your case) also needs to support this protocol. In case your upstream application also uses Nginx, you can enable proxy protocol support in your server configuration as documented in the official documentation.
According to the Nginx Ingress Controller's documentation, this feature can be enabled in the Ingress Controller using a Kubernetes ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  use-proxy-protocol: "true"
Specify the name of the ConfigMap in your Ingress controller manifest by adding the --nginx-configmap=<insert-configmap-name> flag to the command-line arguments.
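Wired into the controller Deployment, that might look roughly like this (a sketch; the image tag and the default backend are placeholders from that era, not taken from the question):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      containers:
      - name: nginx-ingress-controller
        image: gcr.io/google_containers/nginx-ingress-controller:0.8.3  # placeholder version
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --nginx-configmap=$(POD_NAMESPACE)/nginx-ingress-controller   # the ConfigMap defined above
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace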