I have some services defined in my traefik config file like so
services:
  serviceA:
    loadBalancer:
      servers:
        - url: http://serviceA:8080
  serviceB:
    loadBalancer:
      servers:
        - url: http://serviceB:8080
( more services here...)
The services run in Docker containers. I want a certain endpoint in serviceB to be accessible only internally.
http:
  routers:
    to-admin:
      rule: "Host(`{{env "MYHOST"}}`) && PathPrefix(`/serviceB/criticalEndpoint`)"
      service: serviceB
      middlewares:
        - ?
I saw there's a middleware for IP whitelisting, but what IP could I use so that all external access to this endpoint is forbidden while the rest of the service's endpoints stay public?
I believe what you are looking for is the IPWhiteList middleware, which you can attach to your router so it intercepts every request matched by that router and allows or denies it based on the client IP address.
So if you want the endpoint to be reachable only internally, you can allow the CIDR range of your VPC, which covers all possible internal IP addresses.
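Since your question uses the file provider, here is a minimal sketch of the same idea in that format; the middleware name internal-only and the 10.0.0.0/8 range are illustrative, so replace the CIDR with your actual Docker/VPC network:

http:
  middlewares:
    internal-only:
      ipWhiteList:
        sourceRange:
          - 127.0.0.1/32
          - 10.0.0.0/8 # assumption: adjust to your internal network's CIDR
  routers:
    to-admin:
      rule: "Host(`{{env "MYHOST"}}`) && PathPrefix(`/serviceB/criticalEndpoint`)"
      service: serviceB
      middlewares:
        - internal-only

Requests from any other source IP are rejected by Traefik, while routers without the middleware stay public.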
Docker example
# Accepts requests only from the defined IPs
labels:
  - "traefik.http.middlewares.test-ipwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.7"
Kubernetes example
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-ipwhitelist
spec:
  ipWhiteList:
    sourceRange:
      - 127.0.0.1/32
      - 192.168.1.7
sourceRange: The sourceRange option sets the allowed IPs (or ranges of allowed IPs by using CIDR notation).
ipStrategy: The ipStrategy option defines two parameters that set how Traefik determines the client IP: depth, and excludedIPs.
ipStrategy.depth: The depth option tells Traefik to use the X-Forwarded-For header and take the IP located at the depth position (starting from the right).
If depth is greater than the total number of IPs in X-Forwarded-For, then the client IP will be empty.
depth is ignored if its value is less than or equal to 0.
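For instance, with X-Forwarded-For: 10.0.0.1,11.0.0.1,12.0.0.1,13.0.0.1 and depth: 2, Traefik would use 12.0.0.1 as the client IP. A sketch extending the Kubernetes example above (the depth value is illustrative and depends on how many proxies sit in front of Traefik):

apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-ipwhitelist
spec:
  ipWhiteList:
    sourceRange:
      - 127.0.0.1/32
      - 192.168.1.7
    ipStrategy:
      depth: 2 # take the second IP from the right in X-Forwarded-For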
Reference: Traefik Docs
Related
What I have:
I'm having difficulty setting up an Ingress with a Helm chart on the cloud.
I have a project with a FrontEnd, a BackEnd and a MySQL database.
I set up two Ingresses, one for my BackEnd and one for my FrontEnd, and I can access them with an IP given by Google Cloud Platform.
In the FrontEnd and BackEnd charts' values.yaml:
...
service:
  type: LoadBalancer
  port: 8000 # 4200 for the FrontEnd
  targetPort: 8000 # 4200 for FrontEnd
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access-Control-Allow-Origin"
  hosts:
    - paths:
        - path: /
          pathType: ImplementationSpecific
...
My Issue:
The FrontEnd needs to talk to the BackEnd through the Ingress.
In the FrontEnd values.yaml, I need to have a value:
BACKEND_URL: XXX.XXX.XXX.XXX:8000
But I don't know the URL of the BackEnd Ingress, at least not until I deploy the BackEnd.
How can I parameterize it so that it picks up the Ingress URL of the BackEnd?
Or at least, how can I find the Ingress IP? (I've tried kubectl get ingress, but it doesn't show the address.)
You have two options:
Don't use the Ingress but the Service DNS name. This way your traffic doesn't even leave the cluster. If your backend Service is called api and deployed in the backend namespace, you can reach it internally as api.backend (see the sketch after these options). https://kubernetes.io/docs/concepts/services-networking/service/#dns has details about the mechanism.
Reserve the IP on the GCP side and pass it as a parameter to your Helm charts. If you don't, each deletion and recreation of the Service will end up on a different IP assigned by GCP, and clients with a cached DNS response will not be able to reach your service until it expires.
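For the first option, a minimal sketch of the FrontEnd values, assuming the BackEnd Service is called api in the backend namespace and your chart turns this value into an environment variable (both names are illustrative):

# FrontEnd values.yaml (illustrative; the key name depends on your chart)
BACKEND_URL: "http://api.backend.svc.cluster.local:8000"

The short form api.backend works too from inside a pod, thanks to the cluster's DNS search domains.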
For GCP, this snippet from the Kubernetes documentation applies:
Some cloud providers allow you to specify the loadBalancerIP. In those cases, the load-balancer is created with the user-specified loadBalancerIP. If the loadBalancerIP field is not specified, the loadBalancer is set up with an ephemeral IP address. If you specify a loadBalancerIP but your cloud provider does not support the feature, the loadbalancerIP field that you set is ignored.
So get a permanent IP and pass it as loadBalancerIP to the service.
service:
  spec:
    type: LoadBalancer
    port: 8000 # 4200 for the FrontEnd
    targetPort: 8000 # 4200 for FrontEnd
    loadBalancerIP: <the Global or Regional IP you got from GCP (depends on the LB)>
I want to access an external DB which is exposed on IP 10.48.100.124 (there is no DNS name associated with this IP) on port 3306.
I have created a ServiceEntry:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: csd-database
  namespace: testnam-dev
spec:
  hosts:
    - csd-database
  addresses:
    - 10.48.100.124/32
  exportTo:
    - "."
  ports:
    - number: 3306
      name: tcp
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
    - address: 10.48.100.124
      ports:
        tcp: 3306
And it works OK if I try to connect via the IP (10.48.100.124) inside the cluster.
But I want to expose this service (inside the k8s/Istio cluster) with a DNS name, so I have created a VirtualService:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: csd-database
  namespace: testnam-dev
spec:
  hosts:
    - csd-database
  gateways:
    - ingresgateway
  tcp:
    - route:
        - destination:
            host: csd-database
But I'm not able to connect to the host csd-database.
Also, telnet is unable to connect to csd-database on port 3306.
How can I expose the ServiceEntry with a DNS name inside the cluster?
The DB doesn't have a DNS name (externally); it only has an IP address. So the DB is accessible only on 10.48.100.124:3306.
TL;DR: Your ServiceEntry is currently configured to resolve by static IP address.
Change:
resolution: STATIC
to
resolution: DNS
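Applied to the ServiceEntry from the question, the suggested change would look like this (a sketch; everything else is left exactly as posted):

apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: csd-database
  namespace: testnam-dev
spec:
  hosts:
    - csd-database
  addresses:
    - 10.48.100.124/32
  exportTo:
    - "."
  ports:
    - number: 3306
      name: tcp
      protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS # changed from STATIC
  endpoints:
    - address: 10.48.100.124
      ports:
        tcp: 3306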
According to the Istio documentation:
ServiceEntry.Resolution
Resolution determines how the proxy will resolve the IP addresses of
the network endpoints associated with the service, so that it can
route to one of them. The resolution mode specified here has no impact
on how the application resolves the IP address associated with the
service. The application may still have to use DNS to resolve the
service to an IP so that the outbound traffic can be captured by the
Proxy. Alternatively, for HTTP services, the application could
directly communicate with the proxy (e.g., by setting HTTP_PROXY) to
talk to these services.
NONE - Assume that incoming connections have already been resolved
(to a specific destination IP address). Such connections are typically
routed via the proxy using mechanisms such as IP table REDIRECT/ eBPF.
After performing any routing related transformations, the proxy will
forward the connection to the IP address to which the connection was
bound.
STATIC - Use the static IP addresses specified in endpoints (see
below) as the backing instances associated with the service.
DNS - Attempt to resolve the IP address by querying the ambient DNS,
during request processing. If no endpoints are specified, the proxy
will resolve the DNS address specified in the hosts field, if
wildcards are not used. If endpoints are specified, the DNS addresses
specified in the endpoints will be resolved to determine the
destination IP address. DNS resolution cannot be used with Unix domain
socket endpoints.
Using Gloo TCP Proxy to forward port 27017 for MongoDB access in a Kubernetes cluster.
The following Gateway spec works for forwarding all port 27017 traffic to the specified upstream.
spec:
  bindAddress: '::'
  bindPort: 27017
  tcpGateway:
    tcpHosts:
      - destination:
          single:
            upstream:
              name: default-mongodb-27017
              namespace: gloo-system
        name: one
  useProxyProto: false
I would like to forward 27017 traffic based on hostname (for example, d.db.example.com points to the dev instance of Mongo and p.db.example.com points to the prod instance).
Is there a way to specify hostname (like in a virtual service route)?
(Note: This is for an educational simulation, and as such isn't a real "production" environment. This is why both the dev and prod instances will exist in the same Kubernetes cluster, and also why a managed or external MongoDB solution isn't used.)
As I mentioned in the comments, as far as I know it's not possible to do this in the Gateway; at least I could not find anything about it in the Gateway documentation. But you can configure Virtual Services to make it work.
As mentioned in documentation there
The VirtualService is the root Routing object for the Gloo Gateway. A virtual service describes the set of routes to match for a set of domains.
It defines:
- a set of domains
- the root set of routes for those domains
- an optional SSL configuration for server TLS Termination
- VirtualHostOptions that will apply configuration to all routes that live on the VirtualService
Domains must be unique across all virtual services within a gateway (i.e. no overlap between sets).
And there
Virtual Services define a set of route rules, an optional SNI configuration for a given domain or set of domains.
Gloo will select the appropriate virtual service (set of routes) based on the domain specified in a request’s Host header (in HTTP 1.1) or :authority header (HTTP 2.0).
Virtual Services support wildcard domains (starting with *).
Gloo will create a default virtual service for the user if the user does not provide one. The default virtual service matches the * domain, which will serve routes for any request that does not include a Host/:authority header, or a request that requests a domain that does not match another virtual service.
Each domain specified for a virtual service must be unique across the set of all virtual services provided to Gloo.
Take a look at this tutorial.
And more specifically at this example, where they use two domains, echo.example.com and foxtrot.example.com; in your case those would be d.db.example.com and p.db.example.com.
Option 2: Separating ownership across domains
The first alternative we might consider is to model each service with different domains, so that the routes are managed on different objects. For example, if our primary domain was example.com, we could have a virtual service for each subdomain: echo.example.com and foxtrot.example.com.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: echo
  namespace: echo
spec:
  virtualHost:
    domains:
      - 'echo.example.com'
    routes:
      - matchers:
          - prefix: /echo
---
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: foxtrot
  namespace: foxtrot
spec:
  virtualHost:
    domains:
      - 'foxtrot.example.com'
...
I hope this helps.
I know a usage scenario for a Kubernetes headless service with a selector.
But what's the usage scenario for a Kubernetes headless service without a selector?
Aliasing external services into the cluster DNS.
Services without selectors are used when you want to have an external database cluster in production but use your own databases in your test environment, when you want to point your Service to a Service in a different Namespace or on another cluster, or when you are migrating a workload to Kubernetes.
Services without selectors are often used to alias external services into the cluster DNS.
Here is an example of a Service without a selector:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can manually map the Service to the network address and port where it's running by adding an Endpoints object:
apiVersion: v1
kind: Endpoints
metadata:
  name: example-service
subsets:
  - addresses:
      - ip: 192.0.2.42
    ports:
      - port: 9376
If you have more than one IP address for redundancy, you can repeat them in the addresses array. Once the endpoints are populated, the load balancer will start redirecting traffic from your Kubernetes Service to those IP addresses.
Note: The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-proxy doesn’t support virtual IPs as a destination.
You can access a Service without a selector the same as if it had a selector.
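Since the question is specifically about headless Services, the same selector-less pattern also works with clusterIP: None; a minimal sketch based on the Service above (for a headless Service the cluster DNS returns the Endpoints addresses directly instead of a virtual IP):

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  clusterIP: None # headless: no virtual IP, DNS returns the endpoint addresses
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376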
Take a look: services-without-selector, example-service-without-selector.
The IP whitelisting/blacklisting example explained here https://kubernetes.io/docs/tutorials/services/source-ip/ uses the source.ip attribute. However, in Kubernetes (a cluster running on docker-for-desktop) source.ip returns the IP of kube-proxy. A suggested workaround is to use request.headers["X-Real-IP"]; however, it doesn't seem to work and returns the kube-proxy IP on docker-for-desktop on Mac.
https://github.com/istio/istio/issues/7328 mentions this issue and states:
With a proxy that terminates the client connection and opens a new connection to your nodes/endpoints. In such cases the source IP will always be that of the cloud LB, not that of the client.
With a packet forwarder, such that requests from the client sent to the loadbalancer VIP end up at the node with the source IP of the client, not an intermediate proxy.
Loadbalancers in the first category must use an agreed upon protocol between the loadbalancer and backend to communicate the true client IP such as the HTTP X-FORWARDED-FOR header, or the proxy protocol.
Can someone please help with how we can define a protocol to get the client IP from the load balancer?
Maybe you are confusing kube-proxy and Istio. By default Kubernetes uses kube-proxy, but you can install Istio, which injects a new proxy per pod to control the traffic in both directions to the services inside the pod.
With that said, you can install Istio on your cluster, enable it only for the services you need, and apply blacklisting using the Istio mechanisms:
https://istio.io/docs/tasks/policy-enforcement/denial-and-list/
To make a blacklist using the source IP, we have to let Istio manage how the source IP address is fetched and use some configuration like this, taken from the docs:
apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: whitelistip
spec:
  compiledAdapter: listchecker
  params:
    # providerUrl: ordinarily black and white lists are maintained
    # externally and fetched asynchronously using the providerUrl.
    overrides: ["10.57.0.0/16"] # overrides provide a static list
    blacklist: false
    entryType: IP_ADDRESSES
---
apiVersion: config.istio.io/v1alpha2
kind: instance
metadata:
  name: sourceip
spec:
  compiledTemplate: listentry
  params:
    value: source.ip | ip("0.0.0.0")
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: checkip
spec:
  match: source.labels["istio"] == "ingressgateway"
  actions:
    - handler: whitelistip
      instances: [ sourceip ]
---
You can use the providerUrl param to maintain an external list.
Also check using externalTrafficPolicy: Local on the Istio ingress gateway Service.
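A minimal sketch of that setting, assuming the default gateway Service name istio-ingressgateway in the istio-system namespace (the port list is illustrative and should match your installation):

apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway # assumption: default Istio gateway Service name
  namespace: istio-system
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local # preserve the original client source IP
  selector:
    istio: ingressgateway
  ports:
    - name: http2
      port: 80
      targetPort: 80

With the Local traffic policy, a node only forwards to gateway pods running on that node, so the client source IP is preserved instead of being SNATed by kube-proxy.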
As per the comments, my last piece of advice is to use a different ingress controller to avoid going through kube-proxy; my recommendation is to use the NGINX ingress controller:
https://github.com/kubernetes/ingress-nginx
You can configure this ingress controller as a regular NGINX acting as a proxy.
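For example, a minimal sketch of the ingress-nginx ConfigMap options that make the controller honor the real client IP (the ConfigMap name depends on how you installed the controller; enable use-proxy-protocol only if your load balancer actually speaks the PROXY protocol):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller # assumption: ConfigMap name used by your ingress-nginx install
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true" # trust X-Forwarded-For set by the load balancer in front
  use-proxy-protocol: "false" # set to "true" if the LB forwards with the PROXY protocol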