How do I make it accessible from outside my local k8s through traefik - mongodb

I'm messing around with Kubernetes and I've set up a cluster on my local PC using kind. I have also installed Traefik as an ingress controller, and I have already managed to access an API that I have deployed in the cluster, as well as Grafana, through Ingresses (without doing port forwards or anything like that). But with MongoDB I can't. While the API and Grafana need an IngressRoute, MongoDB needs an IngressRouteTCP.
The IngressRouteTCP that I have defined is this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
    - web
  routes:
    - match: HostSNI(`mongo.localhost`)
      services:
        - name: mongodb
          port: 27017
But I get this error:
I know I can use a port forward, but I want to do it this way (with an ingress).
Thanks a lot.

You need to specify TLS parameters, like this:
tls:
  certResolver: "bar"
  domains:
    - main: "snitest.com"
      sans:
        - "*.snitest.com"
To avoid using TLS, you need to match all routes with HostSNI(`*`).
It is important to note that the Server Name Indication is an extension of the TLS protocol.
Hence, only TLS routers will be able to specify a domain name with that rule.
However, there is one special use case for HostSNI with non-TLS routers: when one wants a non-TLS router that matches all (non-TLS) requests, one should use the specific HostSNI(`*`) syntax.
DOCS
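As a minimal sketch (not from the original answer), the IngressRouteTCP from the question could therefore be rewritten without TLS like this, assuming a dedicated TCP entrypoint for MongoDB (here called mongo) has been defined in Traefik's static configuration:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
    - mongo               # assumed dedicated TCP entrypoint, e.g. listening on ":27017"
  routes:
    - match: HostSNI(`*`) # required for non-TLS TCP routers
      services:
        - name: mongodb
          port: 27017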

Related

How to use Istio Service Entry object to connect two services in different namespace?

I have two services, say svcA and svcB that may sit in different namespaces or even in different k8s clusters. I want to configure the services so that svcA can refer to svcB using some constant address, then deploy an Istio Service Entry object depending on the environment to route the request. I will use Helm to do the deployment, so using a condition to choose the object to deploy is easy.
If svcB is in a completely different cluster, it is just like any external server and is easy to configure.
But when it is in a different namespace on the same cluster, I just could not get the Service Entry to work. Maybe I don't understand all the options it provides.
Istio objects
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
    - svcB.alias
  ports:
    - number: 80
      name: http
      protocol: HTTP2
  location: MESH_INTERNAL
  resolution: svcB.svcb-ns.svc.cluster.local
Update
After doing some random/crazy tests, I found that the alias domain name must end with a well-known suffix like .com or .org; arbitrary suffixes like .svc or .alias won't work.
If I update the above ServiceEntry object like this, my application works.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
    - svcB.com
  ports:
    - number: 80
      name: http
      protocol: HTTP2
  location: MESH_INTERNAL
  resolution: svcB.svcb-ns.svc.cluster.local
I searched for a while and checked the Istio documentation, but could not find any reference to domain name suffix restrictions.
Is it implicit knowledge that only domain names like .com and .org are valid? It has been a long time since I left school.
I have posted a community wiki answer to summarize the topic and quote the explanation of the problem:
After doing some random/crazy tests, I found that the alias domain name must end with a well-known suffix like .com or .org; arbitrary suffixes like .svc or .alias won't work.
If I update the above ServiceEntry object like this, my application works.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
    - svcB.com
  ports:
    - number: 80
      name: http
      protocol: HTTP2
  location: MESH_INTERNAL
  resolution: svcB.svcb-ns.svc.cluster.local
I searched for a while and checked the Istio documentation, but could not find any reference to domain name suffix restrictions.
Is it implicit knowledge that only domain names like .com and .org are valid? It has been a long time since I left school.
Explanation:
You can find the ServiceEntry requirements in the official documentation, which describes how to set this value properly:
The hosts associated with the ServiceEntry. Could be a DNS name with wildcard prefix.
The hosts field is used to select matching hosts in VirtualServices and DestinationRules.
For HTTP traffic the HTTP Host/Authority header will be matched against the hosts field.
For HTTPs or TLS traffic containing Server Name Indication (SNI), the SNI value will be matched against the hosts field.
NOTE 1: When resolution is set to type DNS and no endpoints are specified, the host field will be used as the DNS name of the endpoint to route traffic to.
NOTE 2: If the hostname matches with the name of a service from another service registry such as Kubernetes that also supplies its own set of endpoints, the ServiceEntry will be treated as a decorator of the existing Kubernetes service. Properties in the service entry will be added to the Kubernetes service if applicable. Currently, only the following additional properties will be considered by istiod:
subjectAltNames: In addition to verifying the SANs of the service accounts associated with the pods of the service, the SANs specified here will also be verified.
Based on this issue, you don't have to use an FQDN in your hosts field, but you need to set a proper value to select matching hosts in VirtualServices and DestinationRules.
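As a hedged sketch (not part of the original answer), a ServiceEntry that aliases an in-cluster service would usually set resolution to DNS and list the cluster-internal name as an endpoint; the alias host below is purely illustrative:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcb-se
spec:
  hosts:
    - svcb.example.com # illustrative alias used by the calling service
  location: MESH_INTERNAL
  ports:
    - number: 80
      name: http
      protocol: HTTP2
  resolution: DNS
  endpoints:
    - address: svcB.svcb-ns.svc.cluster.local # the real in-cluster service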

What is the disadvantage of using HostSNI(`*`) in a Traefik TCP route mapping?

Now I am using HostSNI(`*`) to map TCP services like MySQL/PostgreSQL in Traefik 2.2.1 on a Kubernetes v1.18 cluster, because I am on my local machine and do not have a valid certificate. This is the config:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-ingress-tcp-route
  namespace: middleware
spec:
  entryPoints:
    - mysql
  routes:
    - match: HostSNI(`*`)
      services:
        - name: report-mysqlha
          port: 3306
This config works fine on my local machine, but I still want to know the side effects of using the HostSNI(`*`) mapping strategy. What is the disadvantage of using HostSNI(`*`) instead of a domain name? Is it possible to use a fake domain name on my local machine?
For those needing an example of TCP with TLS passthrough and SNI routing
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: test-https
  namespace: mynamespace
spec:
  entryPoints:
    - websecure # maps to port 443 by default
  routes:
    - match: HostSNI(`my.domain.com`)
      services:
        - name: myservice
          port: 443
  tls:
    passthrough: true
As of the latest Traefik docs (2.4 at this time):
If both HTTP routers and TCP routers listen to the same entry points, the TCP routers will apply before the HTTP routers
It is important to note that the Server Name Indication is an extension of the TLS protocol. Hence, only TLS routers will be able to specify a domain name with that rule. However, non-TLS routers will have to explicitly use that rule with * (every domain) to state that every non-TLS request will be handled by the router.
Therefore, to answer your questions:
Using HostSNI(`*`) is the only reasonable way to use an IngressRouteTCP without TLS -- since you're explicitly asking for a TCP router, and without TLS there is no SNI to match on.
I've had mixed success with IngressRouteTCP and HostSNI(`some.fqdn.here`) with a tls: section, but it does appear to be a supported configuration as per quote 2 above.
One possible "disadvantage" to this (air quotes because it's subjective) is that this configuration means any traffic that reaches your entrypoint (i.e. mysql) will be routed via this IngressRouteTCP.
Consider: if for some reason you had another IngressRoute on the same entrypoint, the IngressRouteTCP would take precedence as per quote 1.
Consider: if, for example, you wanted to route multiple different MySQL services via the same entrypoint mysql, you wouldn't be able to with this configuration (see the sketch below).
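A minimal sketch of the usual workaround, assuming you can add a second TCP entrypoint to Traefik's static configuration and give each database its own port (the entrypoint and service names below are illustrative):
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-a-route
  namespace: middleware
spec:
  entryPoints:
    - mysql-a # assumed entrypoint, e.g. address ":3306" in the static config
  routes:
    - match: HostSNI(`*`)
      services:
        - name: mysql-a-svc
          port: 3306
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mysql-b-route
  namespace: middleware
spec:
  entryPoints:
    - mysql-b # assumed entrypoint, e.g. address ":3307" in the static config
  routes:
    - match: HostSNI(`*`)
      services:
        - name: mysql-b-svc
          port: 3306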

Can a `ServiceEntry` be applied to only 1 service?

We have a cluster with Istio, but there is this one condition, I can't find how to fulfill.
We need one of the services to have certain restrictions within the mesh as well, and to talk to one external endpoint. Through Sidecar object, I should be able to set the restrictions internally, but I don't know how to restrict to one external endpoint.
I can set the external endpoint in the Sidecar object as well, but I have to create a ServiceEntry anyways, in which case all the services can talk to that external endpoint.
It seems that what I need is to set a ServiceEntry for one specific service, but this is not possible. Is there any other way to achieve this?
I asked this question on GitHub to the Istio team, and the only way to achieve this is to put the service in a different namespace and make the ServiceEntry apply only to the workloads in that namespace through the exportTo parameter.
The ServiceEntry would look like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: se-demo
spec:
  exportTo:
    - . # with ".", the ServiceEntry only applies to the workloads in the same namespace
  hosts:
    - www.google.com
  location: MESH_EXTERNAL
  ports:
    - name: https
      number: 443
      protocol: HTTPS
  resolution: DNS
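To complement this, here is a hedged sketch of the Sidecar object mentioned in the question, restricting egress for workloads in that same namespace (the namespace name is illustrative) to the services visible there plus the Istio control plane:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: restrict-egress
  namespace: restricted-ns # illustrative namespace holding the restricted service
spec:
  egress:
    - hosts:
        - "./*"            # only services defined in, or exported to, this namespace (including se-demo)
        - "istio-system/*" # Istio control plane
  outboundTrafficPolicy:
    mode: REGISTRY_ONLY    # block traffic to destinations not known to the registry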

mutual TLS based on specific IP

I'm trying to configure nginx-ingress for mutual TLS, but only for a specific remote address. I tried to use a snippet, but with no success:
nginx.ingress.kubernetes.io/configuration-snippet: |
  if ($remote_addr = 104.214.x.x) {
    auth-tls-verify-client: on;
    auth-tls-secret: namespace/nginx-ca-secret;
    auth-tls-verify-depth: 1;
    auth-tls-pass-certificate-to-upstream: false;
  }
The auth-tls annotations work when applied as annotations, but inside the snippet they don't.
Any idea how to configure this or maybe a workaround to make it work?
The job of mTLS is basically restricting access to a service by requiring the client to present a certificate. If you expose a service and then require only clients with specific IP addresses to present a certificate, the entire rest of the world can still access your service without a certificate, which completely defeats the point of mTLS.
If you want more info, here is a good article that explains why TLS and mTLS exist and what is the difference between them.
There are two ways to make a sensible setup out of this:
Just use regular TLS instead of mTLS
Make a service in your cluster require mTLS to access it regardless of IP addresses
If you go for option 2, you need to configure the service itself to use mTLS, and then configure ingress to pass through the client certificate to the service. Here's a sample configuration for nginx ingress that will work with a service that expects mTLS:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443
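Alternatively, since the auth-tls annotations do work when applied as plain annotations (as noted in the question), here is a hedged sketch of letting the ingress itself verify client certificates for a whole host; the host and server certificate secret names are illustrative, and the CA secret is the one from the question:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-at-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-secret: "namespace/nginx-ca-secret"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "false"
spec:
  tls:
    - hosts:
        - example.com # illustrative host
      secretName: example-tls # illustrative server certificate secret
  rules:
    - host: example.com
      http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443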

Kubernetes path based routing for multiple namespaces

The environment: I have a Kubernetes cluster set up with namespaces for "dev", "sit" and "prod". In each of these namespaces I have multiple services of type LoadBalancer which target a specific deployment of a dockerised application (I have multiple applications), so I can access each of them just by using the exposed IP address of the service in whichever namespace I want. An example service looks like this and is very simple:
apiVersion: v1
kind: Service
metadata:
  name: application1
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
  type: LoadBalancer
  selector:
    app: application1
The problem: I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.) to allow users to migrate to the new version when they are ready, and I've been trying to implement path-based routing following this guide. I have managed to restructure my architecture so that I have ReplicationControllers and an Ingress which looks at the path rules to route to the correct service.
This seems to work only if I have a single exposed service and a single namespace, because I only have DNS host names for the production environment and want to use the individual IP address of a service for the other environments, and I can't figure out how to specify the ingress rules for a service which doesn't have a hostname.
I could just have a load balancer for every environment and use path-based routing to route to the different services for dev and sit, which is not ideal, because to access any service we'd now have to use something like ip/application1 and ip/application2 instead of directly using the service IP address of each application. But my biggest problem is that when I followed the guide and created the Ingress, ReplicationController and Service in my SIT namespace, it started affecting the LoadBalancer services in my other two environments (as I understand it, Kubernetes would sometimes try to use the nginx controller from the SIT environment on my DEV services and therefore would fail; other times it would use the GCE default configuration and would work).
I tried adding the arg "- --watch-namespace=sit" to limit the scope of the ingress controller to only affect sit, but it does not seem to work.
I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.)
That is exactly what Ingress can do, but the problem is that you want to use IP addresses for routing, while Ingress uses DNS names for that.
I think the best way to implement this is to use an Ingress which will handle requests. On GCE Ingress uses the HTTP(S) load balancer. Yes, you will need a DNS name for that, but it will help you to create a routing which you need.
Also, I highly recommend using TLS encryption for connections.
You can check LetsEncrypt to get a free SSL certificate.
So, the solution should look like below:
1. Deploy your Services with type "ClusterIP" instead of "LoadBalancer". You can have more than one Service object for an application, so you can do this in parallel with your current configuration.
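For instance, a minimal sketch of the application1 Service from the question switched to ClusterIP (only the type changes):
apiVersion: v1
kind: Service
metadata:
  name: application1
spec:
  type: ClusterIP # was LoadBalancer
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
      name: http
  selector:
    app: application1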
2. Select any namespace (even a special one), for instance "ingress-ns". We need to create Service objects there which will point to your services in the other namespaces. Here is an example of such a service (let the new DNS name be "my.shiny.new.domain"):
kind: Service
apiVersion: v1
metadata:
  name: service-v1
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local # the service name and namespace of your service with version v1
  ports:
    - port: 80
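A second ExternalName Service for v2 would follow the same pattern (a sketch; substitute the real service name and namespace of your v2 deployment):
kind: Service
apiVersion: v1
metadata:
  name: service-v2
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local # the service name and namespace of your service with version v2
  ports:
    - port: 80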
3. Now we have a namespace with several services pointing to different versions of your application in different namespaces, and we can create an Ingress object which will create an HTTP(S) Load Balancer on GCE with path-based routing:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
    - host: my.shiny.new.domain
      http:
        paths:
          - path: /v1
            backend:
              serviceName: service-v1
              servicePort: 80
          - path: /v2
            backend:
              serviceName: service-v2
              servicePort: 80
Kubernetes will create a new HTTP(S) load balancer with the rules you set up in the Ingress object, and you will have a single entry point with cross-namespace path-based routing, without having to use multiple IP addresses.
You can also use that Ingress to serve the primary version of an application, using your primary domain with the "/" path to handle requests to your production version.
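Since the answer recommends TLS (for example with a LetsEncrypt certificate), here is a hedged sketch of a tls section that could be added under the spec of the Ingress above, assuming the certificate and key are stored in a Secret named my-shiny-new-domain-tls:
  tls:
    - hosts:
        - my.shiny.new.domain
      secretName: my-shiny-new-domain-tls # Secret containing the TLS certificate and key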