Can two ClusterIP services be connected in Kubernetes?

I want to connect two ClusterIP services that live inside a tenant which already runs Traefik as a NodePort, so neither of the two services can be exposed as a NodePort or LoadBalancer itself (the NodePort is already taken by Traefik).
The two services I am trying to connect work as follows. The first one, which I called "master", receives a POST from the client with some text and calls the other service, called "slave", which appends some text ("Hola Patri") to the text sent by the client. Both are Flask services defined by the app.py in their Docker images. You can see the app.py for both images below:
master/app.py
from flask import Flask, request
import requests

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def put():
    if request.method == 'POST':
        text = request.get_data()
        # forward the client's text to the slave service via its ClusterIP DNS name
        r = requests.post("http://slave:5001", data=text)
        result = r.text
        return result

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5000, debug=True)
slave/app.py
from flask import Flask, request

app = Flask(__name__)

@app.route("/", methods=['GET', 'POST'])
def put():
    if request.method == 'POST':
        text = request.get_data()
        # text = request.data
        # append the extra text to whatever the client sent
        texto_final = str(text) + 'Hola Patri'
        return texto_final

if __name__ == '__main__':
    app.run(host="0.0.0.0", port=5001, debug=True)
The deployments and the services are defined in two YAML files: master_src.yaml and slave_src.yaml.
master_src.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: innovation
  labels:
    name: innovation
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: master
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: master
        imagePullPolicy: Always
        securityContext:
          runAsUser: 1000
          runAsNonRoot: true
        image: reg-dhc.app.corpintra.net/galiani/innovation:mastertest
        ports:
        - protocol: TCP
          containerPort: 5000
      imagePullSecrets:
      - name: galiani-innovation-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: master
  namespace: innovation
spec:
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
  selector:
    app: myapp
slave_src.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: innovation
  labels:
    name: innovation
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slave
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
      - name: slave
        imagePullPolicy: Always
        securityContext:
          runAsUser: 1000
          runAsNonRoot: true
        image: reg-dhc.app.corpintra.net/galiani/innovation:slavetest
        ports:
        - protocol: TCP
          containerPort: 5001
      imagePullSecrets:
      - name: galiani-innovation-pull-secret
---
apiVersion: v1
kind: Service
metadata:
  name: slave
  namespace: innovation
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 5001
    targetPort: 5001
I have also created a network policy to allow traffic between the two services. The YAML used to define the network policy is the following.
networkpolicy_src.yaml
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: ingress-to-all
  namespace: innovation
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: myapp
    ports:
    - port: 5000
      protocol: TCP
    - port: 5001
      protocol: TCP
  policyTypes:
  - Ingress
The connection between the master service and the slave service is not working. I can reach the master and the slave independently. Nevertheless, when I make a POST to the master (using curl), which should in turn call the slave, I get the following error:
curl: (52) Empty reply from server
Thank you for your help in advance!
For the new question I have regarding the connection through Traefik, here is the YAML of the Traefik ingress:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ingress-innovation
  namespace: innovation
  annotations:
    traefik.frontend.rule.type: PathPrefixStrip
spec:
  rules:
  - http:
      paths:
      - path: /master
        backend:
          serviceName: master
          servicePort: 5000
      - path: /slave
        backend:
          serviceName: slave
          servicePort: 5001
I have also corrected the NetworkPolicy YAML, and now it is:
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: master-to-slave
  namespace: innovation
spec:
  podSelector:
    matchLabels:
      app: app-slave
  ingress:
  - ports:
    - port: 5000
      protocol: TCP
    - port: 5001
      protocol: TCP
  - from:
    - namespaceSelector:
        matchLabels:
          app: app-master
Thanks again for your help!

The problem could be that both master and slave use the same label, app: myapp. Change the label to app: master for the master deployment and service, and to app: slave for the slave deployment and service, as sketched below.
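As a minimal sketch (based on the deployment and service from master_src.yaml above, trimmed to the fields that change), the relabeled master manifests could look like this; the slave manifests would get app: slave in the same three places:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: master
  namespace: innovation
spec:
  replicas: 1
  selector:
    matchLabels:
      app: master        # was app: myapp
  template:
    metadata:
      labels:
        app: master      # was app: myapp
    spec:
      containers:
      - name: master
        image: reg-dhc.app.corpintra.net/galiani/innovation:mastertest
        ports:
        - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: master
  namespace: innovation
spec:
  selector:
    app: master          # was app: myapp; now matches only the master pods
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000

With distinct labels, each Service's selector matches only its own pods, so http://slave:5001 resolves to the slave pod instead of load-balancing across both deployments. Note that the NetworkPolicy podSelector (app: myapp) would also need to be updated to match the new labels.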

Related

WebSocket connection between services in Kubernetes

I have two services in Kubernetes (k3s): one is the backend and the other is the frontend, and I need to transfer data from the backend to the frontend through a WebSocket. When I debug in docker-compose it works. When running in docker-compose, on the backend I have (Python):
import websockets
...
start_server = websockets.serve(server, "0.0.0.0", 8001)
...
On the frontend I have (HTML):
...
var ws = new WebSocket('ws://localhost:8001');
...
And I have an onclose function:
ws.onclose = function (e) {
    document.getElementById("textArea").innerHTML = 'Socket is closed. Reconnect will be attempted in 5 seconds...' + e.reason;
    setTimeout(function () {
        connect();
    }, 5000);
};
And it works as it should.
When I use k3s I get: Socket is closed. Reconnect will be attempted in 5 seconds...
Here is my backend (app) deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: my-namespace
  labels:
    app: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
      name: app
    spec:
      containers:
      - args:
        - python3.9
        - main.py
        image: app
        imagePullPolicy: Never
        name: app
        ports:
        - name: app
          containerPort: 8001
And the Service manifest for the backend:
apiVersion: v1
kind: Service
metadata:
  name: app-service
  namespace: my-namespace
  labels:
    app: app-service
spec:
  type: ClusterIP
  ports:
  - port: 8001
    targetPort: app
  selector:
    app: app
My frontend deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
  namespace: my-namespace
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
      name: frontend
    spec:
      containers:
      - args:
        - python3.9
        - -m
        - uvicorn
        - main:app
        - --reload
        - --port
        - "8000"
        - --host
        - 0.0.0.0
        image: frontend
        imagePullPolicy: Never
        name: frontend
        ports:
        - name: frontend
          containerPort: 8000
Service for frontend:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
  namespace: my-namespace
  labels:
    app: frontend
spec:
  type: ClusterIP
  ports:
  - protocol: TCP
    port: 8000
    targetPort: frontend
  selector:
    app: frontend
I have an Ingress controller (Traefik) that routes traffic to the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: traefik-ingress
  namespace: my-namespace
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: my-dns.ca
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 8000
I changed the code in the HTML this way:
var ws = new WebSocket('ws://app-service.my-namespace.svc.cluster.local:8001');
but I still get Socket is closed. Reconnect will be attempted in 5 seconds...
How can I get the connection between the services working, or how can I debug it?

Kubernetes ingress-nginx "not found" (Les Jackson tutorial)

I'm following Les Jackson's Kubernetes tutorial, but I'm stuck around 04:40:00. I always get a 404 returned from my ingress-nginx controller. I followed everything he does, but I can't get it to work.
I also read that this could have something to do with IIS, so I stopped the default website which also runs on port 80.
The apps running in the containers are .NET Core.
Commands-depl & ClusterIP
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Platforms-depl & ClusterIP
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
Ingress-srv
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-srv
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: commands-clusterip-srv
            port:
              number: 80
I also added this to my hosts file:
127.0.0.1 acme.com
And I applied this from the nginx documentation:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml
kubectl get ingress
kubectl describe ing ingress-srv
Dockerfile CommandService
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build-env
WORKDIR /app
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT [ "dotnet", "PlatformService.dll" ]
kubectl logs ingress-nginx-controller-6bf7bc7f94-v2jnp -n ingress-nginx
Am I missing something?
I found my solution. There was a process (PID 4) listening on 0.0.0.0:80. I could stop it using NET stop HTTP in an admin cmd.
I noticed that running kubectl get services -n=ingress-nginx returned an ingress-nginx-controller, which is fine, but the external-ip was not set. Running kubectl get ingress also didn't show an ADDRESS. Now they both show "localhost" as the value for external-ip and ADDRESS.
Reference: Port 80 is being used by SYSTEM (PID 4), what is that?
So this can occur for several reasons:
Pods or containers are not working - try using kubectl get pods -n <your namespace> to see if any are not in 'running' status.
Assuming they are running, try kubectl describe pod <pod name> -n <your namespace> to see the events on your pod, just to make sure it's running properly.
I have noticed you are not exposing ports in your deployments. Please update your deployments like so:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: maartenvissershub/platformservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: commands-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: commandservice
  template:
    metadata:
      labels:
        app: commandservice
    spec:
      containers:
      - name: commandservice
        image: maartenvissershub/commandservice:latest
        ports:
        - name: http
          containerPort: 80
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: commands-clusterip-srv
spec:
  type: ClusterIP
  selector:
    app: commandservice
  ports:
  - name: commandservice
    protocol: TCP
    port: 80
    targetPort: 80
Hope this helps!

Can't configure ingress on gcloud properly

I am trying to deploy a simple app on Google Cloud. I am testing the GitLab cluster integration.
Here is my k8s YAML:
---
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: "my-service"
  labels:
    run: service
spec:
  type: NodePort
  selector:
    run: "service"
  ports:
  - port: 9000
    targetPort: 9000
    protocol: TCP
    name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-api
  namespace: "my-service"
  labels:
    run: service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
      - name: service
        image: "gcr.io/test/service:latest"
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9000
          protocol: TCP
        volumeMounts:
        - name: test
          mountPath: /usr/test
      volumes:
      - name: test
        emptyDir: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "service-ingress"
  namespace: "my-service"
  labels:
    run: service
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: service
          servicePort: 9000
If I log into the pod I can curl the service on the NodePort-designated IP, but if I try to hit the ingress address I just get an error.
I am not sure why there are two backend services on the load balancer that is created automatically; the one that points to my app shows as unhealthy.
(screenshot: load balancer backends)
You need to define a readiness probe in your pod spec, because the GKE ingress controller picks up the health check from the readiness probe.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
      - name: service
        image: "gcr.io/test/service:latest"
        imagePullPolicy: IfNotPresent
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 3
          periodSeconds: 3
        ports:
        - containerPort: 9000
          protocol: TCP
        volumeMounts:
        - name: test
          mountPath: /usr/test
      volumes:
      - name: test
        emptyDir: {}

How to deny by default but allow HTTP and TCP traffic in an Istio Kubernetes cluster?

I have a cluster with Istio injection enabled and a CockroachDB StatefulSet defined:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cockroachdb-serviceaccount
---
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: tcp
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb-statefulset
  labels:
    version: v20.1.2
spec:
  serviceName: cockroachdb
  replicas: 3
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
        version: v20.1.2
    spec:
      serviceAccountName: cockroachdb-serviceaccount
      containers:
      - name: cockroachdb
        image: cockroachdb/cockroach:v20.1.2
        ports:
        - containerPort: 26257
          name: tcp
        - containerPort: 8080
          name: http
        volumeMounts:
        - name: datadir
          mountPath: /cockroach/cockroach-data
        env:
        - name: COCKROACH_CHANNEL
          value: kubernetes-insecure
        command:
        - "/bin/bash"
        - "-ecx"
        # The use of qualified `hostname -f` is crucial:
        # Other nodes aren't able to look up the unqualified hostname.
        - "exec /cockroach/cockroach start --logtostderr --insecure --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join cockroachdb-statefulset-0.cockroachdb,cockroachdb-statefulset-1.cockroachdb,cockroachdb-statefulset-2.cockroachdb --cache 25% --max-sql-memory 25%"
      # No pre-stop hook is required, a SIGTERM plus some time is all that's
      # needed for graceful shutdown of a node.
      terminationGracePeriodSeconds: 5
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: 4Gi
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: cockroachdb-public
spec:
  host: cockroachdb-public
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: cockroachdb-public
spec:
  hosts:
  - cockroachdb-public
  http:
  - match:
    - port: 8080
    route:
    - destination:
        host: cockroachdb-public
        port:
          number: 8080
  tcp:
  - match:
    - port: 26257
    route:
    - destination:
        host: cockroachdb-public
        port:
          number: 26257
and a service that accesses it:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: downstream-serviceaccount
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: downstream-deployment-v1
  labels:
    app: downstream
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: downstream
      version: v1
  template:
    metadata:
      labels:
        app: downstream
        version: v1
    spec:
      serviceAccountName: downstream-serviceaccount
      containers:
      - name: downstream
        image: downstream:0.1
        ports:
        - containerPort: 80
        env:
        - name: DATABASE_URL
          value: postgres://roach@cockroachdb-public:26257/roach?sslmode=disable
---
apiVersion: v1
kind: Service
metadata:
  name: downstream-service
  labels:
    app: downstream
spec:
  type: ClusterIP
  selector:
    app: downstream
  ports:
  - port: 80
    targetPort: 80
    name: http
    protocol: TCP
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: downstream-service
spec:
  host: downstream-service
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: downstream-service
spec:
  hosts:
  - downstream-service
  http:
  - name: "downstream-service-routes"
    match:
    - port: 80
    route:
    - destination:
        host: downstream-service
        port:
          number: 80
Now I'd like to restrict access to CockroachDB so that only downstream-service and CockroachDB itself can reach it (since the nodes need to communicate with each other).
I'm trying to restrict the traffic with something like this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all
  namespace: default
spec: {}
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
  - to:
    - operation:
        ports: ["26257"]
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
  - to:
    - operation:
        ports: ["26257"]
but it doesn't seem to do anything. I can still, for example, access the cockroachdb-public:8080 cluster HTTP UI from downstream-service.
Now when I add the following:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: default-deny-all-to-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: DENY
  rules:
  - to:
    - operation:
        ports: ["26257"]
then all the traffic is blocked (including the traffic between cockroachdb nodes).
What am I doing wrong here?
You are having the same problem as another user did a couple of days ago. In your authorization policy you actually have two rules:
The service account downstream-serviceaccount (and cockroachdb-serviceaccount for the other authorization policy) from the default namespace can access the service with label app: cockroachdb on any port in the default namespace.
Any service account, from any namespace, can access the service with label app: cockroachdb on port 26257.
In order to make it an AND, you would do this:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-cockroachdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/cockroachdb-serviceaccount"]
    to:   # <- remove the dash from here
    - operation:
        ports: ["26257"]
Same with the other AuthorizationPolicy object (a sketch of it with the same fix applied follows below). Also note that you don't need to explicitly create a DENY policy: when you create an ALLOW one, it automatically denies everything else.
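For completeness, here is a sketch of the other policy with the same correction applied; it is just the question's cockroachdb-authorizationpolicy-allow-from-downstream with the dash before to: removed, nothing else changed:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: cockroachdb-authorizationpolicy-allow-from-downstream
  namespace: default
spec:
  selector:
    matchLabels:
      app: cockroachdb
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/downstream-serviceaccount"]
    to:   # no dash: 'from' and 'to' now belong to one rule, so both conditions must hold
    - operation:
        ports: ["26257"]

Each policy then only allows traffic that comes from the named service account and targets port 26257, which is the intended AND semantics.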

How to create an HTTPS route to a Service that is listening on HTTPS with Traefik and Kubernetes

I'm a newbie with Kubernetes and Traefik.
I follow up that tutorial:
https://docs.traefik.io/user-guides/crd-acme/
And I changed it to use my Scala service, which runs over HTTPS on port 9463.
I'm trying to deploy my Scala service with kubernetes and traefik.
When I port-forward directly to the service:
kubectl port-forward service/core-service 8001:9463
And I perform a curl -k 'https://localhost:8001/health' :
I get the "{Message:Ok}"
But when I port-forward to Traefik:
kubectl port-forward service/traefik 9463:9463 -n default
And perform a curl -k 'https://ejemplo.com:9463/tls/health'
I get an "Internal server error"
I guess the problem is that my "core-service" is listening over HTTPS; that's why I added scheme: https.
I tried to find the solution in the documentation, but it is confusing.
These are my YAML files:
Services.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  ports:
  - protocol: TCP
    name: admin
    port: 8080
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: traefik
---
apiVersion: v1
kind: Service
metadata:
  name: core-service
spec:
  ports:
  - protocol: TCP
    name: websecure
    port: 9463
  selector:
    app: core-service
Deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: default
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: traefik
  labels:
    app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
      - name: traefik
        image: traefik:v2.0
        args:
        - --api.insecure
        - --accesslog
        - --entrypoints.websecure.Address=:9463
        - --providers.kubernetescrd
        - --certificatesresolvers.default.acme.tlschallenge
        - --certificatesresolvers.default.acme.email=foo@you.com
        - --certificatesresolvers.default.acme.storage=acme.json
        # Please note that this is the staging Let's Encrypt server.
        # Once you get things working, you should remove that whole line altogether.
        - --certificatesresolvers.default.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
        ports:
        - name: websecure
          containerPort: 9463
        - name: admin
          containerPort: 8080
---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: default
  name: core-service
  labels:
    app: core-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: core-service
  template:
    metadata:
      labels:
        app: core-service
    spec:
      containers:
      - name: core-service
        image: core-service:0.1.4-SNAPSHOT
        ports:
        - name: websecure
          containerPort: 9463
        livenessProbe:
          httpGet:
            port: 9463
            scheme: HTTPS
            path: /health
          initialDelaySeconds: 10
IngressRoute2.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
From the docs:
A TLS router will terminate the TLS connection by default. However,
the passthrough option can be specified to set whether the requests
should be forwarded "as is", keeping all data encrypted.
In your case, SSL passthrough needs to be enabled because the pod is expecting HTTPS traffic.
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: ingressroutetls
  namespace: default
spec:
  entryPoints:
  - websecure
  routes:
  - match: Host(`ejemplo.com`) && PathPrefix(`/tls`)
    kind: Rule
    services:
    - name: core-service
      port: 9463
      scheme: https
  tls:
    certResolver: default
    passthrough: true