Istio - access external DB (TCP) with DNS name - kubernetes

I want to access an external DB that is exposed on IP 10.48.100.124 (there is no DNS name associated with this IP) on port 3306.
I have created this ServiceEntry:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: csd-database
  namespace: testnam-dev
spec:
  hosts:
  - csd-database
  addresses:
  - 10.48.100.124/32
  exportTo:
  - "."
  ports:
  - number: 3306
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: STATIC
  endpoints:
  - address: 10.48.100.124
    ports:
      tcp: 3306
It works fine if I connect via the IP (10.48.100.124) from inside the cluster.
But I want to expose this service inside the k8s/Istio cluster under a DNS name, so I have created this VirtualService:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: csd-database
  namespace: testnam-dev
spec:
  hosts:
  - csd-database
  gateways:
  - ingresgateway
  tcp:
  - route:
    - destination:
        host: csd-database
But I'm not able to connect to the host csd-database; telnet also fails to connect to csd-database on port 3306.
How can I expose a ServiceEntry under a DNS name inside the cluster?
The DB doesn't have an external DNS name, only an IP address, so it is accessible only on 10.48.100.124:3306.

TL;DR: Your ServiceEntry is currently configured to resolve by static IP address.
Change:
resolution: STATIC
to
resolution: DNS
According to the Istio documentation:
ServiceEntry.Resolution
Resolution determines how the proxy will resolve the IP addresses of
the network endpoints associated with the service, so that it can
route to one of them. The resolution mode specified here has no impact
on how the application resolves the IP address associated with the
service. The application may still have to use DNS to resolve the
service to an IP so that the outbound traffic can be captured by the
Proxy. Alternatively, for HTTP services, the application could
directly communicate with the proxy (e.g., by setting HTTP_PROXY) to
talk to these services.
NONE - Assume that incoming connections have already been resolved
(to a specific destination IP address). Such connections are typically
routed via the proxy using mechanisms such as IP table REDIRECT/ eBPF.
After performing any routing related transformations, the proxy will
forward the connection to the IP address to which the connection was
bound.
STATIC - Use the static IP addresses specified in endpoints (see
below) as the backing instances associated with the service.
DNS - Attempt to resolve the IP address by querying the ambient DNS,
during request processing. If no endpoints are specified, the proxy
will resolve the DNS address specified in the hosts field, if
wildcards are not used. If endpoints are specified, the DNS addresses
specified in the endpoints will be resolved to determine the
destination IP address. DNS resolution cannot be used with Unix domain
socket endpoints.
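Applied to the ServiceEntry from the question, the suggested change would look roughly like this (a sketch; only the resolution field differs from the original manifest):
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: csd-database
  namespace: testnam-dev
spec:
  hosts:
  - csd-database
  addresses:
  - 10.48.100.124/32
  exportTo:
  - "."
  ports:
  - number: 3306
    name: tcp
    protocol: TCP
  location: MESH_EXTERNAL
  resolution: DNS   # changed from STATIC
  endpoints:
  - address: 10.48.100.124
    ports:
      tcp: 3306
As the quoted documentation points out, the resolution mode only affects how the proxy resolves the endpoints; the application itself still needs some way to resolve csd-database to an IP (for example the cluster DNS or Istio's DNS proxying) so that the outbound connection is captured by the sidecar.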

Related

How to use an ExternalName service to access an internal service that is exposed with ingress

I am trying out a possible Kubernetes scenario in a local minikube cluster: accessing an internal service that is exposed with an Ingress in one cluster from another cluster via an ExternalName service. I understand that with an Ingress the service is already accessible within the cluster, but since I am trying this out locally with minikube I cannot run two clusters at once; I just wanted to verify whether it is possible to reach an Ingress-exposed service through an ExternalName service.
I started the tunnel with minikube tunnel.
I can access the service at http://k8s-yaml-hello.info.
But when I run curl k8s-yaml-hello-internal inside a running pod, the error I get is curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
  - port: 3000
    targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
  - host: k8s-yaml-hello.info
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: k8s-yaml-hello
            port:
              number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
  - name: ''
    appProtocol: http
    protocol: TCP
    port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
/etc/hosts
127.0.0.1 k8s-yaml-hello.info
As you are getting the error curl: (7) Failed to connect:
This error message means that no web server is answering on the specified IP and port.
Check /etc/hosts (e.g. with nano /etc/hosts) to verify that the IP and port point to the correct domain; if not, provide the correct IP and port.
Refer to this SO answer for more information.
In ingress.yaml use port 80, and in service.yaml the port should also be 80. The service port and target port should be different; as per your YAML they are the same. Change it to 80 and give it a try. If you get any errors, post them here.
The problem is that minikube tunnel binds to the localhost address 127.0.0.1 by default. Every node, machine, VM, container, etc. has its own (and identical) localhost address; it exists so that local services can be reached without knowing the IP address of a network interface (the service is running on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which container you are in (always just "myself").
To make it work the way you want, you first have to find out the IP address of your host's network interface, e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
Then you can use the --bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this, your service should be reachable from within the container. Check first with the IP address:
curl http://192.168.1.10
Then make sure name resolution via /etc/hosts works in your container, using dig, nslookup, getent hosts, or whatever similar tool is available in your container.
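If you also want the pod itself to resolve k8s-yaml-hello.info through its own /etc/hosts rather than through cluster DNS, a sketch (not part of the original answer) using the hostAliases field could look like this, with 192.168.1.10 standing in for the host interface address found above:
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
spec:
  # hostAliases adds entries to the pod's own /etc/hosts file.
  hostAliases:
  - ip: "192.168.1.10"        # placeholder: the host interface address
    hostnames:
    - "k8s-yaml-hello.info"
  containers:
  - name: curl
    image: curlimages/curl
    command: ["sleep", "3600"]   # keep the pod around for testing
From inside this pod, curl http://k8s-yaml-hello.info should then hit the tunnel on the host interface directly.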

Allow a certain endpoint only internally in Traefik

I have some services defined in my Traefik config file like so:
services:
  serviceA:
    loadBalancer:
      servers:
      - url: http://serviceA:8080
  serviceB:
    loadBalancer:
      servers:
      - url: http://serviceB:8080
(more services here...)
The services run in Docker containers. I want a certain endpoint of serviceB to be accessible only internally.
http:
  routers:
    to-admin:
      rule: "Host(`{{env "MYHOST"}}`) && PathPrefix(`/serviceB/criticalEndpoint`)"
      service: serviceB
      middlewares:
      - ?
I saw there's a middleware for IP whitelisting, but which IPs could I use so that all external access to this endpoint is forbidden while the rest of the service's endpoints stay public?
I believe what you are looking for is the IPWhiteList middleware, which you can attach to the router in front of your service so that it intercepts every request to that service and allows or denies it based on the client IP address.
So if you want the endpoint to be reachable only internally, you can allow the CIDR range of your VPC (or internal network), which includes all possible internal IP addresses.
Docker example
# Accepts requests from the defined IPs
labels:
  - "traefik.http.middlewares.test-ipwhitelist.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.7"
Kubernetes example
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: test-ipwhitelist
spec:
  ipWhiteList:
    sourceRange:
    - 127.0.0.1/32
    - 192.168.1.7
sourceRange: The sourceRange option sets the allowed IPs (or ranges of allowed IPs using CIDR notation).
ipStrategy: The ipStrategy option defines two parameters that set how Traefik determines the client IP: depth and excludedIPs.
ipStrategy.depth: The depth option tells Traefik to use the X-Forwarded-For header and take the IP located at the depth position (starting from the right).
If depth is greater than the total number of IPs in X-Forwarded-For, then the client IP will be empty.
depth is ignored if its value is less than or equal to 0.
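Since the question defines its routers in Traefik's file provider, a sketch of how the middleware could be declared and attached there might look like the following (the middleware name internal-only and the CIDR ranges are placeholders; use the ranges of your own Docker/VPC network):
http:
  middlewares:
    internal-only:
      ipWhiteList:
        sourceRange:
        - "127.0.0.1/32"
        - "10.0.0.0/8"        # placeholder: replace with your internal network's CIDR
  routers:
    to-admin:
      rule: 'Host(`{{env "MYHOST"}}`) && PathPrefix(`/serviceB/criticalEndpoint`)'
      service: serviceB
      middlewares:
      - internal-only
Requests to /serviceB/criticalEndpoint from addresses outside the listed ranges are then rejected with 403, while the other routers for serviceB stay untouched.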
Reference: Traefik Docs

Why can't k3s access host services listening on the loopback address?

I deployed k3s on a single Ubuntu machine.
Other services are installed directly on this machine (outside k8s), e.g. Redis, MySQL... They listen on the loopback address 127.0.0.1 for security reasons.
But the services inside k3s cannot connect to my DB. If I change the listening address to 0.0.0.0, the problem goes away.
Why? And what is the best practice for this use case?
PS: I use Endpoints to map the host service into k8s:
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  ports:
  - port: 6379
---
apiVersion: v1
kind: Endpoints
metadata:
  name: redis
subsets:
- addresses:
  - ip: xxxxx (host's ip)
  ports:
  - port: 6379
Thanks to @vincent pli, I realized that I had confused lo with the host itself.
That a service listens on lo does not mean that every service actually running on this machine can access it. If that is what you really want, you must make sure the two services share the same virtual network interface (lo).
Otherwise, if you want to access it through an IP address, the service must listen on that address. If the address is only reachable inside the LAN, this is still reasonably safe, or you can use a firewall to enforce more stringent inbound restrictions.
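For the first option (sharing the node's loopback interface), one way to do that from the Kubernetes side, sketched here as an illustration rather than a recommendation, is to run the client pod with hostNetwork: true so it uses the node's network namespace, including lo:
apiVersion: v1
kind: Pod
metadata:
  name: redis-client
spec:
  # The pod shares the node's network namespace, so 127.0.0.1 is the node's
  # loopback and the host-level Redis on 127.0.0.1:6379 is reachable.
  hostNetwork: true
  restartPolicy: Never
  containers:
  - name: client
    image: redis:7
    command: ["redis-cli", "-h", "127.0.0.1", "-p", "6379", "ping"]
This trades away the pod's network isolation, so binding the database to a LAN-only address and firewalling it, as described above, is usually the cleaner approach.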

What’s the usage scenario of kubernetes headless service without selector?

I know a scenario of kubernetes headless service with selector.
But what’s the usage scenario of kubernetes headless service without selector?
Aliasing external services into the cluster DNS is the most common one.
Services without selectors are also used when you want an external database cluster in production but your own databases in your test environment, when you want to point your Service to a Service in a different namespace or in another cluster, or when you are migrating a workload to Kubernetes.
Here is an example of a Service without a selector:
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
Because this Service has no selector, the corresponding Endpoints object is not created automatically. You can map the Service to the network address and port where the backend is running by adding an Endpoints object manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: example-service
subsets:
- addresses:
  - ip: 192.0.2.42
  ports:
  - port: 9376
If you have more than one IP address for redundancy, you can list them all in the addresses array. Once the endpoints are populated, traffic to your Kubernetes Service will be redirected to those IP addresses.
Note: The endpoint IPs must not be: loopback (127.0.0.0/8 for IPv4, ::1/128 for IPv6), or link-local (169.254.0.0/16 and 224.0.0.0/24 for IPv4, fe80::/64 for IPv6). Endpoint IP addresses cannot be the cluster IPs of other Kubernetes Services, because kube-proxy doesn’t support virtual IPs as a destination.
You can access a Service without a selector the same as if it had a selector.
Take a look: services-without-selector, example-service-without-selector.
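Since the question asks about headless Services in particular: the headless variant of the example above is the same selector-less Service with clusterIP: None, in which case the cluster DNS returns the manually added endpoint IPs directly as A records instead of a single virtual IP (a sketch):
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  clusterIP: None    # headless: no virtual IP is allocated
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376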

Assign an External IP to a Node

I'm running a bare-metal Kubernetes cluster and trying to use a load balancer to expose my services. I know that the load balancer is typically a function of the underlying public cloud, but with the recent support for ingress controllers it seems it should now be possible to use nginx as a self-hosted load balancer.
So far, I've been following the example here to set up an nginx ingress controller and some test services behind it. However, I am unable to follow step 6, which displays the external IP of the node the load balancer is running on, because my node has no ExternalIP in the addresses section, only a LegacyHostIP and an InternalIP.
I've tried manually assigning an ExternalIP to my cluster by specifying it in the service's specification. However, this appears to be mapped to the externalID instead.
How can I manually set my node's ExternalIP address?
This is tested and works for an nginx service created on a particular node.
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    name: http
  - port: 443
    protocol: TCP
    targetPort: 443
    name: https
  externalIPs:
  - '{{external_ip}}'
  selector:
    app: nginx
This assumes an nginx deployment upstream listening on ports 80 and 443.
The externalIP is the public IP of the node.
I would suggest checking out MetalLB: https://github.com/google/metallb
It provides external IP addresses in a bare-metal cluster using either ARP or BGP. It has worked great for us and lets you simply request a LoadBalancer service like you would in the cloud.
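For reference, a minimal sketch of MetalLB's (older, ConfigMap-based) layer 2 configuration looks like the following; 192.168.1.240-192.168.1.250 is a placeholder for a free range on your LAN, and newer MetalLB releases configure the same thing with IPAddressPool and L2Advertisement custom resources instead:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
With a pool like this in place, a Service of type LoadBalancer automatically gets one of those addresses assigned.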