Istio ServiceEntry not working as expected on minikube

I have 2 minikube clusters: Cluster1 and Cluster2.
Cluster1 has Backend Service
Cluster2 has Info Service
Use case: the Backend service sends a request to the Info service through the ingress gateway.
Cluster2 has an ingress gateway, and I used the minikube tunnel command to get an external IP for the gateway. The IP was 127.0.0.1 and the port is 8080. If I curl from the Backend pod to host.docker.internal on port 8080, I get the correct response from the Info service.
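For reference, that working call from the Backend pod is roughly (the exact curl flags are illustrative):
curl -v http://host.docker.internal:8080/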
I have created a ServiceEntry for the external service (Info) in Cluster1:
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
  name: info-ext
spec:
  hosts:
    - host.docker.internal
  ports:
    - number: 8080
      name: http
      protocol: HTTP
  resolution: DNS
  location: MESH_EXTERNAL
When I try to access http://host.docker.internal from the backend application, it keeps sending the request on port 80.
* Rebuilt URL to: http://host.docker.internal/
* Trying 192.168.65.2...
* TCP_NODELAY set
* Connected to host.docker.internal (192.168.65.2) port 80 (#0)
> GET / HTTP/1.1
> Host: host.docker.internal
> User-Agent: curl/7.52.1
> Accept: */*
Can I get any leads on what I am missing, and how I can access an external service directly?

Related

How to use an ExternalName service to access an internal service that is exposed with ingress

I am trying out a possible Kubernetes scenario in a local minikube cluster: accessing an internal service that is exposed with an ingress in one cluster from another cluster, using an ExternalName service. I understand that with an ingress the service is already accessible within the cluster. As I am trying this out locally with minikube, I am unable to run two clusters simultaneously; I just want to verify whether it is possible to access an ingress-exposed service through an ExternalName service.
I started the minikube tunnel using minikube tunnel.
I can access the service using http://k8s-yaml-hello.info.
But when I try curl k8s-yaml-hello-internal from within a running pod, the error I get is curl: (7) Failed to connect to k8s-yaml-hello-internal port 80 after 1161 ms: Connection refused
Can anyone point out the issue here? Thanks in advance.
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
    - port: 3000
      targetPort: 3000
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-yaml-hello-ingress
  labels:
    name: k8s-yaml-hello-ingress
spec:
  rules:
    - host: k8s-yaml-hello.info
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: k8s-yaml-hello
                port:
                  number: 3000
externalName.yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello-internal
spec:
  ports:
    - name: ''
      appProtocol: http
      protocol: TCP
      port: 3000
  type: ExternalName
  externalName: k8s-yaml-hello.info
/etc/hosts
127.0.0.1 k8s-yaml-hello.info
As you are getting the error curl: (7) Failed to connect:
This error message means that no web server is listening on the specified (or implied) IP and port.
Check /etc/hosts (for example with nano /etc/hosts) to see whether the domain points to the correct IP. If it doesn't, correct the entry.
Refer to this SO for more information.
In ingress.yaml use port 80, and in service.yaml the Service port should also be 80 (the Service port and targetPort don't have to be the same; in your YAML they are both 3000). Change it to 80 and try again; if you get any errors, post them here.
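A sketch of what that suggestion amounts to (assuming the container itself still listens on 3000; the Ingress backend port number would then change to 80 as well):
apiVersion: v1
kind: Service
metadata:
  name: k8s-yaml-hello
spec:
  selector:
    app: k8s-yaml-hello
  ports:
    - port: 80          # Service port referenced by the Ingress backend
      targetPort: 3000  # container port stays the same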
The problem is that minikube tunnel by default binds to the localhost address 127.0.0.1. Every node, machine, VM, container, etc. has its own, identical localhost address; it exists so that local services can be reached without knowing the IP address of the network interface (the service is running on "myself"). So when k8s-yaml-hello.info resolves to 127.0.0.1, it points to a different service depending on which container you are in (always just "myself").
To make it work the way you want, you first have to find out the IP address of your host's network interface, e.g. with ifconfig. Its name is something like eth0 or en0, depending on your system.
Then you can use the bind-address option of minikube tunnel to bind to that address instead:
minikube tunnel --bind-address=192.168.1.10
With this your service should be reachable from within the container. Please check first with the ip address:
curl http://192.168.1.10
Then make sure the name resolution from /etc/hosts works in your container, using dig, nslookup, getent hosts, or something similar that is available in your container.
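If your container image has the tools, those checks could look roughly like this (the pod name is illustrative):
# run the name-resolution and connectivity checks from inside a pod
kubectl exec some-running-pod -- getent hosts k8s-yaml-hello.info
kubectl exec some-running-pod -- curl -sv http://k8s-yaml-hello-internal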

Real Client IP for TCP services - Nginx Ingress Controller

We have HTTP and TCP services behind the Nginx Ingress Controller. The HTTP services are configured through an Ingress object, where, when we get the request, a configuration snippet generates a header (Client-Id) and passes it to the service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-pre
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      if ($host ~ ^(?<client>[^\..]+)\.(?<app>pre|presaas).(host1|host2).com$) {
        more_set_input_headers 'Client-Id: $client';
      }
spec:
  tls:
    - hosts:
        - "*.pre.host1.com"
      secretName: pre-host1
    - hosts:
        - "*.presaas.host2.com"
      secretName: presaas-host2
  rules:
    - host: "*.pre.host1.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-front
                port:
                  number: 80
    - host: "*.presaas.host2.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-front
                port:
                  number: 80
The TCP service is exposed directly, configured through a ConfigMap. These services connect through a TCP socket.
apiVersion: v1
data:
  "12345": pre/service-back:12345
kind: ConfigMap
metadata:
  name: tcp-service
  namespace: ingress-nginx
All this config works fine. The TCP clients connect fine through TCP sockets, and the users connect fine over HTTP. The problem is that the TCP clients, when they establish the connection, get the source IP address (their own IP, or $remote_addr in Nginx) and report it back to an admin endpoint, where it is shown on a dashboard. So there is a dashboard with all the connected TCP clients and their IP addresses. What happens now is that all the IP addresses, instead of being the clients' own, are the IP of the Ingress Controller (the pod).
I set use-proxy-protocol: "true", and it seems to resolve the issue for the TCP connections, as in the logs I can see different external IP addresses being connected, but now HTTP services do not work, including the dashboard itself. These are the logs:
while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:80
2022/04/04 09:00:13 [error] 35#35: *5273 broken header: "��d�hԓ�:�����ӝp��E�L_"�����4�<����0�,�(�$��
����kjih9876�w�s��������" while reading PROXY protocol, client: 1.2.3.4, server: 0.0.0.0:443
I know the broken header logs are from HTTP services, as if I do telnet to the HTTP port I get the broken header, and if I telnet to the TCP port I get clean logs with what I expect.
I hope the issue is clear. What I need is a way to configure the Nginx Ingress Controller to serve both HTTP and TCP services. I don't know whether the use-proxy-protocol: "true" parameter can be set for only one service; it seems to be a global parameter.
For now the solution we are considering is to set up a new Network Load Balancer (this is running in an AWS EKS cluster) just for the TCP service, and leave HTTP behind the Ingress Controller.
To solve this issue, go to the NLB target groups and enable proxy protocol version 2 in the attributes tab: Network LB >> Listeners >> TCP80/TCP443 >> select Target Group >> Attributes tab >> Enable proxy protocol v2.
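If you prefer doing that outside the console, a rough CLI equivalent (the target group ARN is a placeholder) would be:
# enable proxy protocol v2 on the NLB target group backing the TCP listener
aws elbv2 modify-target-group-attributes \
  --target-group-arn <your-target-group-arn> \
  --attributes Key=proxy_protocol_v2.enabled,Value=true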

HAProxy Not Working with Kubernetes NodePort for Backend (Bare Metal)

I have a host running HAProxy already. It's been up and running since before I did anything with Kubernetes. It works flawlessly as a reverse proxy and SSL terminator for various web things in various Docker containers on various other host machines.
Now I have a Kubernetes cluster up and running across some of those other machines. I've created the NodePort Service that exposes port 30080 on each worker node, as follows:
apiVersion: v1
kind: Service
metadata:
  name: snginx
  labels:
    app: nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local  # Cluster or Local
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30080
From the machine running HAProxy (which is not part of the cluster), I can curl the NodePort successfully ( curl 10.0.0.20:30080 ), and I get "Welcome to nginx!..." However, if I set that NodePort as a backend in HAProxy, I get a 503 "No server is available", and HAProxy traffic log says:
localhost haproxy[18489]: [redactedIP]:49399 [30/Aug/2021:19:24:00.764] http-in~ load/load 66/0/-1/-1/67 503 212 - - SC-- 1/1/0/0/3 0/0 "GET / HTTP/1.1"
The haproxy admin log says:
Aug 30 20:07:13 localhost haproxy[18839]: Server load/load is DOWN, reason: Layer4 connection problem, info: "General socket error (Permission denied)"
However, I've disabled the firewall with
sudo systemctl disable --now firewalld
and verified the status is not running. Also, SELinux was disabled when I installed the cluster. Also, I can ping 10.0.0.20 just fine.
"load" is the hostname I'm using for testing load balancing (i.e. load.mydomain.com).
Also, if I use PAT on my physical router to route directly to that NodePort, from outside the building, it works as expected.
What gives? What's the difference between the proxied request and curl?
Thank you.
SELinux is the difference. That is, SELinux on the HAProxy host (not a cluster node):
"SELinux only allows the web server to make outbound connections to a limited set of ports"
That is, you can't make an outbound HTTP request to any port in the NodePort range (30000-32767) without first allowing that port on the "client", which is the HAProxy server in this case.
sudo semanage port --add --type http_port_t --proto tcp 30080
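To confirm the port was added, you can list the ports assigned to http_port_t with the same semanage tooling:
# 30080 should now show up alongside the default http ports
sudo semanage port -l | grep http_port_t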

Kubernetes: expose services with Traefik 2.x as ingress with CRD

What do I have working
I have a Kubernetes cluster as follows:
A single control plane (but I plan to extend to 3 control planes for HA)
2 worker nodes
On this cluster I deployed (following this doc from Traefik: https://docs.traefik.io/user-guides/crd-acme/):
A deployment that creates two pods:
traefik itself, which is in charge of routing, with ports 80 and 8080 exposed
whoami: a simple HTTP server that responds to HTTP requests
Two services:
the traefik service
the whoami service
One Traefik IngressRoute
What do I want
I have multiple services running in the cluster and I want to expose them to the outside using Ingress.
More precisely, I want to use the new Traefik 2.x CRD ingress methods.
My ultimate goal is to use the new Traefik 2.x CRDs to expose resources on ports 80, 443 and 8080 using IngressRoute custom resource definitions.
What's the problem
If I understand correctly, classic Ingress controllers allow exposing any port we want to the outside world (including 80, 8080 and 443).
But the new Traefik CRD ingress approach on its own does not expose anything at all.
One solution is to define the Traefik service as a LoadBalancer-typed service and then expose some ports. But you are forced to use the 30000-32767 port range (same as NodePort), and I don't want to add a reverse proxy in front of the reverse proxy just to be able to expose ports 80 and 443...
Also, I've seen in the doc for the new ingress CRD (https://docs.traefik.io/user-guides/crd-acme/) that:
kubectl port-forward --address 0.0.0.0 service/traefik 8000:8000 8080:8080 443:4443 -n default
is required, and I understand that now: you need to map the host port to the service port.
But mapping the ports that way feels clunky and counterintuitive. I don't want to have part of the service description in YAML and at the same time have to remember that I need to map ports with kubectl.
I'm pretty sure there is a neat and simple solution to this problem, but I can't figure out how to keep things simple. Do you have any experience in Kubernetes with the new Traefik 2.x CRD config?
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80
      targetPort: 8000
    - protocol: TCP
      name: admin
      port: 8080
      targetPort: 8080
    - protocol: TCP
      name: websecure
      port: 443
      targetPort: 4443
  selector:
    app: traefik
Have you tried using targetPort, where every request that comes in on 80 is redirected to 8000? Also, when you use port-forward you always need to target the service instead of the pod.
You can try using the LoadBalancer service type to expose the Traefik service on ports 80, 443 and 8080. I've tested the YAML from the link you provided on GKE, and it works.
You need to change the ports on the 'traefik' service and add 'LoadBalancer' as the service type:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
    - protocol: TCP
      name: web
      port: 80        # port to receive HTTP connections
    - protocol: TCP
      name: admin
      port: 8080      # administration port
    - protocol: TCP
      name: websecure
      port: 443       # port to receive HTTPS connections
  selector:
    app: traefik
  type: LoadBalancer  # define the load balancer type
Kubernetes will create a LoadBalancer for your service, and you can access your application using ports 80 and 443.
$ curl https://35.111.XXX.XX/tls -k
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /tls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 443
X-Forwarded-Proto: https
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
$ curl http://35.111.XXX.XX/notls
Hostname: whoami-5df4df6ff5-xwflt
IP: 127.0.0.1
IP: 10.60.1.11
RemoteAddr: 10.60.1.13:55262
GET /notls HTTP/1.1
Host: 35.111.XXX.XX
User-Agent: curl/7.66.0
Accept: */*
Accept-Encoding: gzip
X-Forwarded-For: 10.60.1.1
X-Forwarded-Host: 35.111.XXX.XX
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-66dd84c65c-4c5gp
X-Real-Ip: 10.60.1.1
Well, after some time I've decided to put an HAProxy in front of the Kubernetes cluster. It seems to be the only solution at the moment.
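For what it's worth, a minimal sketch of such an HAProxy front end, assuming the Traefik service is exposed as a NodePort on the workers (the node IPs, NodePorts and backend names below are illustrative):
# /etc/haproxy/haproxy.cfg (sketch): forward 80/443 to Traefik NodePorts on the workers
frontend http_in
    bind *:80
    mode tcp
    default_backend traefik_web

frontend https_in
    bind *:443
    mode tcp
    default_backend traefik_websecure

backend traefik_web
    mode tcp
    server worker1 192.168.1.21:30080 check
    server worker2 192.168.1.22:30080 check

backend traefik_websecure
    mode tcp
    server worker1 192.168.1.21:30443 check
    server worker2 192.168.1.22:30443 check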

How to access kubernetes service externally on bare metal install

I'm trying to make a bare-metal k8s cluster provide some services, and I need to be able to serve them on TCP port 80 and UDP port 69 (accessible from outside the k8s cluster). I've set the cluster up using kubeadm and it's running Ubuntu 16.04. How do I access the services externally? I've been trying to use load balancers and ingress but am having no luck, since I'm not using an external load balancer (local rather than AWS etc.).
An example of what I'm trying to do can be found here but it's using GCE.
Thanks
Service with NodePort
Create a service with type NodePort; the Service can listen on a TCP/UDP port in the 30000-32767 range on every node. By default, you cannot simply choose to expose a Service on port 80 on your nodes.
kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    # each port needs a name when a Service exposes more than one port
    - name: tcp-port
      protocol: TCP
      port: {SERVICE_PORT}
      targetPort: {POD_PORT}
      nodePort: 31000
    - name: udp-port
      protocol: UDP
      port: {SERVICE_PORT}
      targetPort: {POD_PORT}
      nodePort: 32000
  type: NodePort
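For the TCP 80 / UDP 69 case in the question, a filled-in sketch might look like this (the service name, selector and nodePort values are illustrative):
kind: Service
apiVersion: v1
metadata:
  name: web-and-tftp
spec:
  selector:
    app: MyApp
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 31080
    - name: tftp
      protocol: UDP
      port: 69
      targetPort: 69
      nodePort: 32069
  type: NodePort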
The container image gcr.io/google_containers/proxy-to-service:v2 is a very small container that will do port-forwarding for you. You can use it to forward a pod port or a host port to a service. Pods can choose any port or host port, and are not limited in the same way Services are.
apiVersion: v1
kind: Pod
metadata:
  name: dns-proxy
spec:
  containers:
    - name: proxy-udp
      image: gcr.io/google_containers/proxy-to-service:v2
      args: [ "udp", "53", "kube-dns.default", "1" ]
      ports:
        - name: udp
          protocol: UDP
          containerPort: 53
          hostPort: 53
    - name: proxy-tcp
      image: gcr.io/google_containers/proxy-to-service:v2
      args: [ "tcp", "53", "kube-dns.default" ]
      ports:
        - name: tcp
          protocol: TCP
          containerPort: 53
          hostPort: 53
Ingress
If there are multiple services sharing the same TCP port with different hosts/paths, deploy the NGINX Ingress Controller, which listens on HTTP 80 and HTTPS 443.
Create an Ingress to forward the traffic to the specified services.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - http:
        paths:
          - path: /testpath
            backend:
              serviceName: test
              servicePort: 80
If I was going to do this on my home network I'd do it like this:
Configure 2 port forwarding rules on my router, to redirect traffic to an nginx box acting as a L4 load balancer.
So if my router IP was 1.2.3.4 and my custom L4 nginx LB was 192.168.1.200
Then I'd tell my router to port forward:
1.2.3.4:80 --> 192.168.1.200:80
1.2.3.4:443 --> 192.168.1.200:443
I'd follow this https://kubernetes.github.io/ingress-nginx/deploy/
and deploy most of what's in the generic cloud ingress controller. That should create an ingress controller pod, an L7 Nginx LB deployment and service in the cluster, and expose it on NodePorts, so you'd have NodePort 32080 and 32443 (note they would actually be random, but this is easier to follow). Since you're working on bare metal, I don't believe it'd be able to auto-spawn and configure the L4 load balancer for you.
I'd then manually configure the L4 load balancer to load balance traffic coming in on port 80 ---> NodePort 32080
port 443 ---> NodePort 32443
So between that big picture of what to do and the following link, you should be good: https://kubernetes.github.io/ingress-nginx/deploy/baremetal/
(Btw this will let you continue to configure your ingress with the ingress controller)
Note: I plan to setup a bare metal cluster in my home closet in a few months so let me know how it goes!
If you have just one node, deploy the ingress controller as a DaemonSet with host port 80. Do not deploy a Service for it.
If you have multiple nodes: with cloud providers, a load balancer is a construct outside the cluster that's basically an HA proxy to each node running pods of your service on some port(s). You could do this kind of thing manually: for any service you want to expose, set the type to NodePort with some port in the allowed range (somewhere in the 30000s) and spin up another VM with a TCP balancer (such as nginx) pointing at all your nodes on that port. You'll be limited to running as many pods as you have nodes for that service.