Traefik Ingress does not return an HTTP/HTTPS response - kubernetes

My website doesn't get any HTTPS certificates generated via Let's Encrypt and doesn't even return a response. This is for test purposes, so it uses a basic NGINX image that I want to serve over HTTPS.
My Service and Ingress look like this:
apiVersion: v1
kind: Service
metadata:
  name: nginxtest
  labels:
    app: nginx
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: nginxtest
              servicePort: http
My TOML ConfigMap looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.toml: |
    defaultEntryPoints = ["http","https"]
    debug = false
    logLevel = "ERROR"
    # Config to redirect http to https
    [entryPoints]
      [entryPoints.http]
      address = ":80"
      compress = true
        [entryPoints.http.redirect]
        entryPoint = "https"
      [entryPoints.https]
      address = ":443"
      compress = true
        [entryPoints.https.tls]
    [api]
      [api.statistics]
      recentErrors = 10
    [kubernetes]
    [ping]
    entryPoint = "http"
    [accessLog]
    [acme]
    email = "a#b.com"
    storage = "certs/acme.json"
    acmeLogging = true
    entryPoint = "https"
    OnHostRule = true
    # caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      [acme.httpChallenge]
      entryPoint = "http"
Any help is appreciated; please point out any errors.
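One thing worth checking right away: the Service above declares selector twice, and the first block points at the Traefik controller pods (k8s-app: traefik-ingress-lb) rather than the nginx pods, so depending on how the YAML is parsed the Service may never select the nginx backend at all. A minimal sketch with a single selector, assuming the nginx pods are labelled app: nginx, would be:
apiVersion: v1
kind: Service
metadata:
  name: nginxtest
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP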

Related

Error: connect ECONNREFUSED 127.0.0.1:443 http://ingress-nginx-controller.ingress-nginx.svc.cluster.local

kubectl get namespace
default           Active   3h33m
ingress-nginx     Active   3h11m
kube-node-lease   Active   3h33m
kube-public       Active   3h33m
kube-system       Active   3h33m
kubectl get services -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.102.205.190   localhost     80:31378/TCP,443:31888/TCP   3h12m
ingress-nginx-controller-admission   ClusterIP      10.103.97.209    <none>        443/TCP                      3h12m
When I make the request to http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser from Next.js getInitialProps, it throws Error: connect ECONNREFUSED 127.0.0.1:443.
LandingPage.getInitialProps = async () => {
  if (typeof window === "undefined") {
    const { data } = await axios.get(
      "http://ingress-nginx-controller.ingress-nginx.svc.cluster.local/api/users/currentuser",
      {
        headers: {
          Host: "ticketing.dev",
        },
      }
    );
    return data;
  } else {
    const { data } = await axios.get("/api/users/currentuser");
    return data;
  }
};
My auth-depl.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: sajeebxn/auth
          env:
            - name: MONGO_URI
              value: 'mongodb://tickets-mongo-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
And my ingress-srv.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  tls:
    - hosts:
        - ticketing.dev
      # secretName: e-ticket-secret
  rules:
    - host: ticketing.dev
      http:
        paths:
          - path: /api/users/?(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-srv
                port:
                  number: 3000
          - path: /?(.*)
            pathType: Prefix
            backend:
              service:
                name: client-srv
                port:
                  number: 3000
Try using http://ingress-nginx-controller.ingress-nginx/api/users/currentuser instead.
This worked for me.
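If you want to avoid hard-coding that URL in getInitialProps, one option is to inject it into the client deployment as an environment variable. A hypothetical sketch (the deployment name, image and variable name are assumptions, not taken from the question):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-depl                 # hypothetical client deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      containers:
        - name: client
          image: sajeebxn/client    # assumed image name
          env:
            - name: INGRESS_URL     # read this in getInitialProps instead of the hard-coded string
              value: 'http://ingress-nginx-controller.ingress-nginx.svc.cluster.local'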

Kubernetes: Serve paths from different service

I have two services: one serving static files and the other serving APIs. I have created a single ingress for both.
I want to serve / from service1 and /api from service2. My services are running fine,
but I am getting a 404 for the /api path.
Below is my Kubernetes YAML:
---
apiVersion: v1
kind: Namespace
metadata:
  name: "myapp"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "service2"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
spec:
  replicas: 2
  selector:
    matchLabels:
      project: "myapp"
      run: "service2"
    matchExpressions:
      - {key: project, operator: In, values: ["myapp"]}
  template:
    metadata:
      labels:
        project: "myapp"
        env: "prod"
        run: "service2"
    spec:
      securityContext:
        sysctls:
          - name: net.ipv4.ip_local_port_range
            value: "1024 65535"
      imagePullSecrets:
        - name: tildtr
      containers:
        - name: "node-container"
          image: "images2"
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "service1"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
spec:
  replicas: 2
  selector:
    matchLabels:
      project: "myapp"
      run: "service1"
    matchExpressions:
      - {key: project, operator: In, values: ["myapp"]}
  template:
    metadata:
      labels:
        project: "myapp"
        env: "prod"
        run: "service1"
    spec:
      securityContext:
        sysctls:
          - name: net.ipv4.ip_local_port_range
            value: "1024 65535"
      imagePullSecrets:
        - name: tildtr
      containers:
        - name: "nginx-container"
          image: "image1"
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: "service1"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
    run: "service1"
spec:
  selector:
    project: "myapp"
  type: ClusterIP
  ports:
    - name: "service1"
      port: 80
      targetPort: 80
  selector:
    run: "service1"
---
apiVersion: v1
kind: Service
metadata:
  name: "service2"
  namespace: "myapp"
  labels:
    project: "myapp"
    env: "prod"
    run: "service2"
spec:
  selector:
    project: "myapp"
  type: ClusterIP
  ports:
    - name: "service2"
      port: 80
      targetPort: 3000
  selector:
    run: "service2"
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "myapp"
  namespace: "myapp"
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/server-alias: "*.xyz.in"
    nginx.ingress.kubernetes.io/server-snippet: |
      sendfile on;
      tcp_nopush on;
      tcp_nodelay on;
      keepalive_timeout 50;
      keepalive_requests 100000;
      reset_timedout_connection on;
      client_body_timeout 20;
      send_timeout 2;
      types_hash_max_size 2048;
      client_max_body_size 20M;
      gzip on;
      gzip_min_length 10240;
      gzip_proxied expired no-cache no-store private auth;
      gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml application/json;
      gzip_disable "MSIE [1-6]\.";
spec:
  rules:
    - host: "myhost.in"
      http:
        paths:
          - path: /api
            backend:
              serviceName: "service2"
              servicePort: 80
          - path: /
            backend:
              serviceName: "service1"
              servicePort: 80
And this is my ingress describe output:
Name:             myapp
Namespace:        myapp
Address:          10.100.160.106
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host                             Path  Backends
  ----                             ----  --------
  ds-vaccination.timesinternet.in
                                   /api  service2:3000 (10.0.1.113:3000,10.0.2.123:3000)
                                   /     service1:80 (10.0.1.37:80,10.0.2.59:80)
Annotations:  nginx.ingress.kubernetes.io/rewrite-target: /
              nginx.ingress.kubernetes.io/server-alias: *.xyz.in
              nginx.ingress.kubernetes.io/server-snippet:
                sendfile on;
                tcp_nopush on;
                tcp_nodelay on;
                keepalive_timeout 50;
                keepalive_requests 100000;
                reset_timedout_connection on;
                client_body_timeout 20;
                send_timeout 2;
                types_hash_max_size 2048;
                client_max_body_size 20M;
                gzip on;
                gzip_min_length 10240;
                gzip_proxied expired no-cache no-store private auth;
                gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml application/json;
                gzip_disable "MSIE [1-6]\.";
Events:
  Type    Reason  Age               From                      Message
  ----    ------  ----              ----                      -------
  Normal  Sync    6s (x5 over 31m)  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    6s (x5 over 31m)  nginx-ingress-controller  Scheduled for sync
Remove this annotation and try again:
annotations:
  nginx.ingress.kubernetes.io/rewrite-target: /
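For reference, a sketch of the Ingress from the question with that annotation dropped (host, paths and service names copied from above; the server-alias and server-snippet annotations are left out for brevity):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "myapp"
  namespace: "myapp"
spec:
  rules:
    - host: "myhost.in"
      http:
        paths:
          - path: /api
            backend:
              serviceName: "service2"
              servicePort: 80
          - path: /
            backend:
              serviceName: "service1"
              servicePort: 80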
If your cluster still supports the old extensions/v1beta1 API, you can also split the two services across hosts:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-ingress
spec:
  rules:
    - host: service1.example.com
      http:
        paths:
          - backend:
              serviceName: service1
              servicePort: 80
    - host: service2.example.com
      http:
        paths:
          - backend:
              serviceName: service2
              servicePort: 80
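On newer clusters, where extensions/v1beta1 has been removed, the same host-based split would look roughly like this with the stable networking.k8s.io/v1 API:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-ingress
spec:
  rules:
    - host: service1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: service2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2
                port:
                  number: 80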

Problem Sub Path Ingress Controller for Backend Service

I have a problem setting up ingress paths for my backend service. For example, I want to set up:
a frontend app with Angular (path: /)
a backend service with NodeJS (path: /webservice)
NodeJS: Index.js
const express = require('express')
const app = express()
const port = 4000
app.get('/', (req, res) => res.send('Welcome to myApp!'))
app.use('/data/office', require('./roffice'));
app.listen(port, () => console.log(`Example app listening on port ${port}!`))
Another route: roffice.js
var express = require('express')
var router = express.Router()
router.get('/getOffice', async function (req, res) {
  res.send('Get Data Office')
});
module.exports = router
Deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ws-stack
spec:
  selector:
    matchLabels:
      run: ws-stack
  replicas: 2
  template:
    metadata:
      labels:
        run: ws-stack
    spec:
      containers:
        - name: ws-stack
          image: wsstack/node/img
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4000
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-wsstack
  labels:
    run: service-wsstack
spec:
  type: NodePort
  ports:
    - port: 80
      protocol: TCP
      nodePort: 30009
      targetPort: 4000
  selector:
    run: ws-stack
ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: hello-world.info
    - http:
        paths:
          - path: /
            backend:
              serviceName: service-ngstack   # frontend
              servicePort: 80
          - path: /webservice
            backend:
              serviceName: service-wsstack   # backend
              servicePort: 80
I set up the deployment, service and ingress successfully, but when I call them with curl:
curl http://<minikubeip>/webservice --> Welcome to myApp! => correct
curl http://<minikubeip>/webservice/data/office/getOffice --> Welcome to myApp! => not correct
Whichever route I call, the result is the same 'Welcome to myApp!'. But if I use the NodePort directly:
curl http://<minikubeip>:30009/data/office/getOffice => 'Get Data Office', which works properly.
What is the problem? Any solution? Thank you.
TL;DR
nginx.ingress.kubernetes.io/rewrite-target: /$2
path: /webservice($|/)(.*)
Explanation
The problem comes from this line in your ingress:
nginx.ingress.kubernetes.io/rewrite-target: /
You're telling nginx to rewrite the URL to / whatever it matched:
/webservice => /
/webservice/data/office/getOffice => /
To do what you're trying to do, use a regex. Here is a simple example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: hello-world.info
    - http:
        paths:
          - path: /
            backend:
              serviceName: service-ngstack   # frontend
              servicePort: 80
          - path: /webservice($|/)(.*)
            backend:
              serviceName: service-wsstack   # backend
              servicePort: 80
This way you're asking nginx to rewrite the URL with the second matching group.
Finally it gives you:
/webservice => /
/webservice/data/office/getOffice => /data/office/getOffice
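If the cluster is on the stable networking.k8s.io/v1 Ingress API, the same idea translates roughly as below (also merging the host and http entries into a single rule); note that regex paths need pathType: ImplementationSpecific with ingress-nginx:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: stack-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: hello-world.info
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-ngstack   # frontend
                port:
                  number: 80
          - path: /webservice($|/)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: service-wsstack   # backend
                port:
                  number: 80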

How to fix "bad certificate error" in traefik 2.0?

I'm setting up Traefik 2.0-alpha with Let's Encrypt certificates inside GKE, but I'm now stuck on a "server.go:3012: http: TLS handshake error from 10.32.0.1:2244: remote error: tls: bad certificate" error in the container logs.
Connections via HTTP work fine. When I try to connect via HTTPS, Traefik returns a 404 with its own default certificate.
I found the same problem reported for Traefik v1 on GitHub. The solution there was to add the following to the config:
InsecureSkipVerify = true
passHostHeader = true
That didn't help me.
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-ingress-configmap
  namespace: kube-system
data:
  traefik.toml: |
    [Global]
    sendAnonymousUsage = true
    debug = true
    logLevel = "DEBUG"
    [ServersTransport]
    InsecureSkipVerify = true
    [entrypoints]
      [entrypoints.web]
      address = ":80"
      [entryPoints.web-secure]
      address = ":443"
      [entrypoints.mongo-port]
      address = ":11111"
    [providers]
      [providers.file]
    [tcp] # YAY!
      [tcp.routers]
        [tcp.routers.everything-to-mongo]
        entrypoints = ["mongo-port"]
        rule = "HostSNI(`*`)" # Catches every request
        service = "database"
      [tcp.services]
        [tcp.services.database.LoadBalancer]
          [[tcp.services.database.LoadBalancer.servers]]
          address = "mongodb-service.default.svc:11111"
    [http]
      [http.routers]
        [http.routers.for-jupyterx-https]
        entryPoints = ["web-secure"] # won't listen to entrypoint mongo-port
        # rule = "Host(`clients-ui.ddns.net`)"
        # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
        rule = "PathPrefix(`/jupyterx`)"
        service = "jupyterx"
          [http.routers.for-jupyterx.tls]
        [http.routers.for-jupyterx-http]
        entryPoints = ["web"] # won't listen to entrypoint mongo-port
        # rule = "Host(`clients-ui.ddns.net`)"
        # rule = "Path(`/jupyterx`)" # abo /jupyterx/*
        rule = "PathPrefix(`/jupyterx`)"
        service = "jupyterx"
      [http.services]
        [http.services.jupyterx.LoadBalancer]
        PassHostHeader = true
        # InsecureSkipVerify = true
          [[http.services.jupyterx.LoadBalancer.servers]]
          url = "http://jupyter-service.default.svc/"
          weight = 100
    [acme] # every router with TLS enabled will now be able to use ACME for its certificates
    email = "account#mail.com"
    storage = "acme.json"
    # onHostRule = true # dynamic generation based on the Host() & HostSNI() matchers
    caServer = "https://acme-staging-v02.api.letsencrypt.org/directory"
      [acme.httpChallenge]
      entryPoint = "web" # used during the challenge
And DaemonSet yaml:
# ---
# apiVersion: v1
# kind: ServiceAccount
# metadata:
#   name: traefik-ingress-controller
#   namespace: kube-system
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      volumes:
        # - name: traefik-ui-tls-cert
        #   secret:
        #     secretName: traefik-ui-tls-cert
        - name: traefik-ingress-configmap
          configMap:
            name: traefik-ingress-configmap
      containers:
        - image: traefik:2.0 # The official v2.0 Traefik docker image
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: web-secure
              containerPort: 443
              hostPort: 443
            - name: admin
              containerPort: 8080
            - name: mongodb
              containerPort: 11111
          volumeMounts:
            - mountPath: "/config"
              name: "traefik-ingress-configmap"
          args:
            - --api
            - --configfile=/config/traefik.toml
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 443
      name: web-secure
    - protocol: TCP
      port: 8080
      name: admin
    - port: 11111
      protocol: TCP
      name: mongodb
  type: LoadBalancer
  loadBalancerIP: 1.1.1.1
Any suggestions on how to fix this?
Due to the lack of documentation for the Traefik 2.0 alpha, the config file was written using only the examples from the official Traefik page.
The "routers for HTTP & HTTPS" configuration example at https://docs.traefik.io/v2.0/routing/routers/ looks like:
[http.routers]
  [http.routers.Router-1-https]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
    [http.routers.Router-1.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
But a working config looks like:
[http.routers]
  [http.routers.Router-1-https]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
    [http.routers.Router-1-https.tls] # will terminate the TLS request
  [http.routers.Router-1-http]
  rule = "Host(`foo-domain`) && Path(`/foo-path/`)"
  service = "service-id"
So, in my config the line
  [http.routers.for-jupyterx.tls]
should be changed to
  [http.routers.for-jupyterx-https.tls]
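For completeness, later Traefik v2 releases also accept YAML for the file provider, so the corrected router pair would look roughly like this (a sketch, same names as the TOML above):
http:
  routers:
    for-jupyterx-https:
      entryPoints:
        - web-secure
      rule: "PathPrefix(`/jupyterx`)"
      service: jupyterx
      tls: {}                  # terminates TLS for this router
    for-jupyterx-http:
      entryPoints:
        - web
      rule: "PathPrefix(`/jupyterx`)"
      service: jupyterx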

istio load balancing of a single service with multiple versions

I was able to achieve load balancing with the sample Istio applications:
https://github.com/piomin/sample-istio-services
https://istio.io/docs/guides/bookinfo/
But I was not able to get Istio load balancing working with a single private service that has two versions, for example two Consul servers with different versions.
Service and pod definition:
apiVersion: v1
kind: Service
metadata:
  name: consul-test
  labels:
    app: test
spec:
  ports:
    - port: 8500
      name: http
  selector:
    app: test
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v1
    spec:
      containers:
        - name: consul-test-v1
          image: consul:latest
          ports:
            - containerPort: 8500
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: consul-test-v2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test
        version: v2
    spec:
      containers:
        - name: consul-test-v2
          image: consul:1.1.0
          ports:
            - containerPort: 8500
Gateway definition:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: con-gateway
spec:
  hosts:
    - "*"
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            exact: /catalog
      route:
        - destination:
            host: consul-test
            port:
              number: 8500
Routing rules in virtual service:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: consul-test
spec:
  hosts:
    - consul-test
  gateways:
    - con-gateway
    - mesh
  http:
    - route:
        - destination:
            host: consul-test
            subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: consul-test
spec:
  host: consul-test
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2
Though I route all traffic (HTTP requests) to Consul server version v1, my HTTP requests to the consul-test service land on v1 and v2 alternately, i.e. they follow the round-robin rule.
$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
consul-test   ClusterIP   10.97.200.140   <none>        8500/TCP   9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "ebfa341b-4557-a392-9f8a-8ee307113faa",
    "Node": "consul-test-v1-765dd566dd-6cmj9",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 9,
    "ModifyIndex": 10
  }
]
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
[
  {
    "ID": "1b60a5bd-9a17-ff18-3a65-0ff95b3a836a",
    "Node": "consul-test-v2-fffd475bc-st4mv",
    "Address": "127.0.0.1",
    "Datacenter": "dc1",
    "TaggedAddresses": {
      "lan": "127.0.0.1",
      "wan": "127.0.0.1"
    },
    "Meta": {
      "consul-network-segment": ""
    },
    "CreateIndex": 5,
    "ModifyIndex": 6
  }
]
I have the above-mentioned issue when curl is run against the service's ClusterIP:ClusterPort:
$ kubectl get svc
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
consul-test   ClusterIP   10.97.200.140   <none>        8500/TCP   9m
$ curl -L http://10.97.200.140:8500/v1/catalog/nodes
But load balancing works as expected when curl is run against INGRESS_HOST and INGRESS_PORT (determining INGRESS_HOST and INGRESS_PORT is described here):
$ curl -L http://$INGRESS_HOST:$INGRESS_PORT/v1/catalog/nodes   # WORKS