Using Gloo TCP Proxy to forward port 27017 for MongoDB access in a Kubernetes cluster.
The following Gateway spec works for forwarding all port 27017 traffic to the specified upstream.
spec:
  bindAddress: '::'
  bindPort: 27017
  tcpGateway:
    tcpHosts:
    - destination:
        single:
          upstream:
            name: default-mongodb-27017
            namespace: gloo-system
      name: one
  useProxyProto: false
I would like to forward 27017 traffic based on hostname (for example, d.db.example.com points to the dev instance of Mongo and p.db.example.com points to the prod instance).
Is there a way to specify hostname (like in a virtual service route)?
(Note: This is for an educational simulation and as such isn't a real "production" environment. That is why both the dev and prod instances exist in the same Kubernetes cluster, and also why a managed or external MongoDB solution isn't used.)
As I mentioned in the comments, as far as I know it's not possible to do this in a Gateway; at least I could not find anything about it in the Gateway documentation. But you can configure virtual services to make it work.
As mentioned in the documentation here:
The VirtualService is the root Routing object for the Gloo Gateway. A virtual service describes the set of routes to match for a set of domains.
It defines:
- a set of domains
- the root set of routes for those domains
- an optional SSL configuration for server TLS termination
- VirtualHostOptions that will apply configuration to all routes that live on the VirtualService.
Domains must be unique across all virtual services within a gateway (i.e. no overlap between sets).
And here:
Virtual Services define a set of route rules, an optional SNI configuration for a given domain or set of domains.
Gloo will select the appropriate virtual service (set of routes) based on the domain specified in a request’s Host header (in HTTP 1.1) or :authority header (HTTP 2.0).
Virtual Services support wildcard domains (starting with *).
Gloo will create a default virtual service for the user if the user does not provide one. The default virtual service matches the * domain, which will serve routes for any request that does not include a Host/:authority header, or a request that requests a domain that does not match another virtual service.
Each domain specified for a virtual service must be unique across the set of all virtual services provided to Gloo.
Take a look at this tutorial, and more specifically at this example, where they use two domains, echo.example.com and foxtrot.example.com. In your case those would be d.db.example.com and p.db.example.com.
Option 2: Separating ownership across domains
The first alternative we might consider is to model each service with different domains, so that the routes are managed on different objects. For example, if our primary domain was example.com, we could have a virtual service for each subdomain: echo.example.com and foxtrot.example.com.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: echo
  namespace: echo
spec:
  virtualHost:
    domains:
    - 'echo.example.com'
    routes:
    - matchers:
      - prefix: /echo
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: foxtrot
  namespace: foxtrot
spec:
  virtualHost:
    domains:
    - 'foxtrot.example.com'
...
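Adapted to your case, a minimal sketch could look like the one below. The upstream names default-mongodb-dev-27017 and default-mongodb-prod-27017 are hypothetical; substitute whatever upstreams Gloo discovered for your dev and prod Mongo services:
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: mongo-dev
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - 'd.db.example.com'
    routes:
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: default-mongodb-dev-27017 # hypothetical dev upstream
            namespace: gloo-system
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: mongo-prod
  namespace: gloo-system
spec:
  virtualHost:
    domains:
    - 'p.db.example.com'
    routes:
    - matchers:
      - prefix: /
      routeAction:
        single:
          upstream:
            name: default-mongodb-prod-27017 # hypothetical prod upstream
            namespace: gloo-system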
I hope this helps.
I have some services defined in my Traefik config file, like so:
services:
  serviceA:
    loadBalancer:
      servers:
      - url: http://serviceA:8080
  serviceB:
    loadBalancer:
      servers:
      - url: http://serviceB:8080
(more services here...)
The services are in Docker containers. I want a certain endpoint in serviceB to be accessible only internally.
http:
  routers:
    to-admin:
      rule: "Host(`{{env "MYHOST"}}`) && PathPrefix(`/serviceB/criticalEndpoint`)"
      service: serviceB
      middlewares:
      - ?
I saw there's a middleware for IP whitelisting, but what IP could I use so that all external access to this endpoint is forbidden while the rest of the endpoints on the service are public?
I believe what you are looking for is the IPWhiteList middleware, which you can attach to your router so that it intercepts every request to that service and allows/denies it based on the client IP address.
So if you want the service to be exposed only internally, you can give the CIDR range of your VPC, which will include all possible internal IP addresses.
Docker example
# Accepts requests from the defined IPs
labels:
  - "traefik.http.middlewares.test-ipwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.7"
Kubernetes example
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-ipwhitelist
spec:
  ipWhiteList:
    sourceRange:
    - 127.0.0.1/32
    - 192.168.1.7
sourceRange: The sourceRange option sets the allowed IPs (or ranges of allowed IPs by using CIDR notation).
ipStrategy: The ipStrategy option defines two parameters that set how Traefik determines the client IP: depth and excludedIPs.
ipStrategy.depth: The depth option tells Traefik to use the X-Forwarded-For header and take the IP located at the depth position (starting from the right).
If depth is greater than the total number of IPs in X-Forwarded-For, then the client IP will be empty.
depth is ignored if its value is less than or equal to 0.
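Putting it together for your file-provider config, a minimal sketch could look like the following. The middleware name internal-only and the 10.0.0.0/8 range are assumptions; use the CIDR of your own Docker network or VPC:
http:
  routers:
    to-admin:
      rule: "Host(`{{env "MYHOST"}}`) && PathPrefix(`/serviceB/criticalEndpoint`)"
      service: serviceB
      middlewares:
      - internal-only
  middlewares:
    internal-only:
      ipWhiteList:
        sourceRange:
        - 10.0.0.0/8 # assumed internal network CIDR; adjust to your environment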
Reference: Traefik Docs
I'm messing around with Kubernetes and I've set up a cluster on my local PC using kind. I have also installed Traefik as an ingress controller, and I have already managed to access an API that I have deployed in the cluster, as well as Grafana, through some ingresses (without doing port forwards or anything like that). But with Mongo I can't. While the API and Grafana need an IngressRoute, Mongo needs an IngressRouteTCP.
The IngressRouteTCP that I have defined is this:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
  - web
  routes:
  - match: HostSNI(`mongo.localhost`)
    services:
    - name: mongodb
      port: 27017
But I get an error.
I know I can use a port forward, but I want to do it this way (with an ingress).
Many thanks.
You need to specify TLS parameters, like this:
tls:
  certResolver: "bar"
  domains:
  - main: "snitest.com"
    sans:
    - "*.snitest.com"
To avoid using TLS, you need to match all routes with HostSNI(`*`).
It is important to note that the Server Name Indication is an extension of the TLS protocol.
Hence, only TLS routers will be able to specify a domain name with that rule.
However, there is one special use case for HostSNI with non-TLS routers: when one wants a non-TLS router that matches all (non-TLS) requests, one should use the specific HostSNI(`*`) syntax.
DOCS
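So a non-TLS sketch for this case could look like the one below. The mongo entryPoint is an assumption: it would have to be defined in the static configuration (e.g. --entrypoints.mongo.address=:27017), since the web entryPoint normally carries HTTP on port 80:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: mongodb-ingress-tcp
  namespace: mongo_namespace
spec:
  entryPoints:
  - mongo # assumed dedicated TCP entryPoint
  routes:
  - match: HostSNI(`*`) # matches all non-TLS TCP traffic
    services:
    - name: mongodb
      port: 27017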
I have two services, say svcA and svcB that may sit in different namespaces or even in different k8s clusters. I want to configure the services so that svcA can refer to svcB using some constant address, then deploy an Istio Service Entry object depending on the environment to route the request. I will use Helm to do the deployment, so using a condition to choose the object to deploy is easy.
If svcB is in a completely different cluster, it is just like any external server and is easy to configure.
But when it is in a different namespace in the same cluster, I just could not get the Service Entry to work. Maybe I don't understand all the options it provides.
Istio objects
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: demo-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
  - svcB.alias
  ports:
  - number: 80
    name: http
    protocol: HTTP2
  location: MESH_INTERNAL
  resolution: svcB.svcb-ns.svc.cluster.local
Update
After doing some random/crazy tests, I found that the alias domain name must end with a well-known suffix like .com or .org; arbitrary suffixes like .svc or .alias won't work.
If I update the above ServiceEntry object like this, my application works.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
  - svcB.com
  ports:
  - number: 80
    name: http
    protocol: HTTP2
  location: MESH_INTERNAL
  resolution: svcB.svcb-ns.svc.cluster.local
I searched for a while and checked the Istio documentation, but could not find any reference to domain name suffix restrictions.
Is it implicit knowledge that only domain names like .com and .org are valid? It has been a long time since I left school.
I have posted a community wiki answer to summarize the topic and quote the explanation of the problem:
After doing some random/crazy tests, I found that the alias domain name must end with a well-known suffix like .com or .org; arbitrary suffixes like .svc or .alias won't work.
If I update the above ServiceEntry object like this, my application works.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
  - svcB.com
  ports:
  - number: 80
    name: http
    protocol: HTTP2
  location: MESH_INTERNAL
  resolution: svcB.svcb-ns.svc.cluster.local
I searched for a while and checked the Istio documentation, but could not find any reference to domain name suffix restrictions.
Is it implicit knowledge that only domain names like .com and .org are valid? It has been a long time since I left school.
Explanation:
You can find the ServiceEntry requirements in the official documentation, which describes how to set this value properly:
The hosts associated with the ServiceEntry. Could be a DNS name with wildcard prefix.
The hosts field is used to select matching hosts in VirtualServices and DestinationRules.
For HTTP traffic the HTTP Host/Authority header will be matched against the hosts field.
For HTTPs or TLS traffic containing Server Name Indication (SNI), the SNI value will be matched against the hosts field.
NOTE 1: When resolution is set to type DNS and no endpoints are specified, the host field will be used as the DNS name of the endpoint to route traffic to.
NOTE 2: If the hostname matches with the name of a service from another service registry such as Kubernetes that also supplies its own set of endpoints, the ServiceEntry will be treated as a decorator of the existing Kubernetes service. Properties in the service entry will be added to the Kubernetes service if applicable. Currently, only the following additional properties will be considered by istiod:
subjectAltNames: In addition to verifying the SANs of the service accounts associated with the pods of the service, the SANs specified here will also be verified.
Based on this issue, you don't have to use an FQDN in your hosts field, but you need to set a proper value to select matching hosts in VirtualServices and DestinationRules.
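As a side note, resolution in the posted spec takes an enum (NONE, STATIC or DNS) rather than a hostname; the concrete backend usually goes into endpoints. A minimal sketch along those lines, reusing the names from the question:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: svcB-se
spec:
  hosts:
  - svcB.com # the alias clients will call
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP2
  resolution: DNS
  endpoints:
  - address: svcB.svcb-ns.svc.cluster.local # the actual in-cluster service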
I am watching a Pluralsight video on the Istio service mesh. One part of the presentation says this:
The VirtualService uses the Kubernetes service to find the IP addresses of all the pods. The VirtualService doesn't route any traffic through the [Kubernetes] service, but it just uses it to get the list of endpoints where the traffic could go.
It also shows a graphic illustrating the pod discovery (not the traffic routing).
I am a bit confused by this because I don't know how an Istio VirtualService knows which Kubernetes Service to look at. I don't see any reference in the example Istio VirtualService yaml files to a Kubernetes Service.
I have theorized that the DestinationRules could have enough labels on them to get down to just the needed pods, but the examples only use the labels v1 and v2. It seems unlikely that a version alone will give only the needed pods. (Many different Services could be on v1 or v2.)
How does an Istio VirtualService know which Kubernetes Service to associate to?
or said another way,
How does an Istio VirtualService know how to find the correct pods from all the pods in the cluster?
When creating a VirtualService, you define which service to route to in the route.destination section:
- port: the port the service is running on
- host: the name of the service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: test
spec:
  hosts:
  - "example.com"
  gateways:
  - test-gateway
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        port:
          number: 80
        host: app-service
So:
app pod(s) -> (managed by) app-service -> test VirtualService
Arfat's answer is correct.
I want to add the following part from the docs about the host, which should make things even more clear.
https://istio.io/latest/docs/reference/config/networking/virtual-service/#VirtualService
[...] Note for Kubernetes users: When short names are used (e.g. “reviews” instead of “reviews.default.svc.cluster.local”), Istio will interpret the short name based on the namespace of the rule, not the service. A rule in the “default” namespace containing a host “reviews” will be interpreted as “reviews.default.svc.cluster.local”, irrespective of the actual namespace associated with the reviews service. To avoid potential misconfigurations, it is recommended to always use fully qualified domain names over short names.
So when you write host: app-service and the VirtualService is in the default namespace, the host is interpreted as app-service.default.svc.cluster.local, which is the FQDN of the kubernetes service. If the app-service is in another namespace, say dev, you need to set the host as host: app-service.dev.svc.cluster.local.
Same goes for DestinationRule, where the FQDN of a kubernetes service is defined as host, as well.
https://istio.io/latest/docs/reference/config/networking/destination-rule/#DestinationRule
VirtualService and DestinationRule are configured for a host. The VirtualService defines where the traffic should go (e.g. host, weights for different versions, ...), and the DestinationRule defines how the traffic should be handled (e.g. the load balancing algorithm and how the versions are defined).
So traffic is not routed like this
Gateway -> VirtualService -> DestinationRule -> Service -> Pod, but like
Gateway -> Service, considering the config from VirtualService and DestinationRule.
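To make the pairing concrete, here is a minimal DestinationRule sketch for the app-service example above. The dev namespace and the version labels are assumptions; the pods would need matching labels:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: app-service
spec:
  host: app-service.dev.svc.cluster.local # FQDN, as recommended above
  subsets:
  - name: v1
    labels:
      version: v1 # assumes pods labeled version: v1
  - name: v2
    labels:
      version: v2 # assumes pods labeled version: v2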
The environment: I have a Kubernetes cluster set up with namespaces for "dev", "sit" and "prod". In each of these namespaces I have multiple services of type: LoadBalancer, each targeting a specific deployment of a dockerised application (I have multiple applications), so I can access each of them by just using the exposed IP address of the service in whichever namespace I want. An example service looks like this and is very simple:
apiVersion: v1
kind: Service
metadata:
  name: application1
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
    name: http
  type: LoadBalancer
  selector:
    app: application1
The problem: I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.) to allow users to migrate to the new version when they are ready, and I've been trying to implement path-based routing following this guide. I have managed to restructure my architecture so that I have ReplicationControllers and an Ingress which uses path rules to route to the correct service.
This would seem to work only if I had a single exposed service and a single namespace, because I only have DNS host names for the production environment and want to use the individual IP address of each service in the other environments, and I can't figure out how to specify the ingress rules for a service which doesn't have a hostname.
I could just have a load balancer for every environment and use path-based routing to the different services for dev and sit, which is not ideal, because to access any service we'd now have to use something like ip/application1 and ip/application2 instead of directly using the service IP address of each application. But my biggest problem is that when I followed the guide and created the Ingress, ReplicationController and a Service in my SIT namespace, it started affecting the LoadBalancer services in my other two environments (as I understand it, Kubernetes would sometimes try to use the nginx controller from the SIT environment on my DEV services and therefore fail; other times it would use the GCE default configuration and work).
I tried adding the arg "- --watch-namespace=sit" to limit the scope of the ingress controller so that it only affects sit, but it does not seem to work.
I now want to be able to support multiple versions of all applications (ip:/v1/, ip:/v2/ etc.)
That is exactly what Ingress can do, but the problem is that you want to use IP addresses for routing, while Ingress uses DNS names for that.
I think the best way to implement this is to use an Ingress which will handle requests. On GCE, Ingress uses the HTTP(S) load balancer. Yes, you will need a DNS name for that, but it will help you create the routing you need.
Also, I highly recommend using TLS encryption for connections.
You can check LetsEncrypt to get a free SSL certificate.
So, the solution should look like this:
1. Deploy your Services with type "ClusterIP" instead of "LoadBalancer". You can have more than one Service object for an application, so you can do this in parallel with your current configuration.
2. Select any namespace (even a special one), for instance "ingress-ns". We need to create Service objects there which will point to your services in other namespaces. Here is an example of a service (let the new DNS name be "my.shiny.new.domain"):
kind: Service
apiVersion: v1
metadata:
  name: service-v1
  namespace: ingress-ns
spec:
  type: ExternalName
  externalName: <service>.<namespace>.svc.cluster.local # the service name and namespace of your service with version v1
  ports:
  - port: 80
3. Now we have a namespace with several services pointing to different versions of your application in different namespaces, and we can create an Ingress object which will create an HTTP(S) Load Balancer on GCE with path-based routing:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
  - host: my.shiny.new.domain
    http:
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
Kubernetes will create a new HTTP(S) balancer with the rules you set up in the Ingress object, and you will have a single entry point with cross-namespace path-based routing, without having to use multiple IP addresses.
Actually, you can also use that Ingress to manage the primary version of an application, using your primary domain with the "/" path to handle requests to your production version.
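For that last idea, a sketch of the extended Ingress could look like this, where service-prod is a hypothetical ExternalName Service pointing at your production version:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  namespace: ingress-ns
spec:
  rules:
  - host: my.shiny.new.domain
    http:
      paths:
      - path: /v1
        backend:
          serviceName: service-v1
          servicePort: 80
      - path: /v2
        backend:
          serviceName: service-v2
          servicePort: 80
      - path: /
        backend:
          serviceName: service-prod # hypothetical ExternalName Service for prod
          servicePort: 80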