I want to deploy a software application with Docker and Kubernetes and I have a big issue.
I have a master node and a worker node; inside, I have a Python application running on port 5000 with its Service.
I want to expose my app externally and I'm using an Ingress. When I curl the nginx Deployment and the nginx Service I get a response, but when I curl the Ingress I get connection refused.
Thank you so much.
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
  namespace: lazy-trading
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: nginx
  strategy: {}
  template:
    metadata:
      labels:
        io.kompose.service: nginx
    spec:
      containers:
        - image: nginx:1.17-alpine
          imagePullPolicy: IfNotPresent
          name: nginx
          ports:
            - containerPort: 80
          resources: {}
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              readOnly: true
              name: nginx-conf
      volumes:
        - name: nginx-conf
          configMap:
            name: nginx-conf
            items:
              - key: nginx.conf
                path: nginx.conf
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx
  namespace: lazy-trading
spec:
  ports:
    - name: "8094"
      port: 8094
      targetPort: 80
  selector:
    io.kompose.service: nginx
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: lazy-trading
spec:
  rules:
    - host: lazytrading.local
      http:
        paths:
          - path: /
            backend:
              serviceName: nginx
              servicePort: 8094
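Note that newer clusters (Kubernetes 1.22+) no longer serve the networking.k8s.io/v1beta1 Ingress API; a sketch of the same Ingress written against the networking.k8s.io/v1 schema would look roughly like this (pathType: Prefix is an assumption, pick whichever path type fits):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: lazy-trading
spec:
  rules:
    - host: lazytrading.local
      http:
        paths:
          - path: /
            pathType: Prefix   # assumption; Exact or ImplementationSpecific also possible
            backend:
              service:
                name: nginx
                port:
                  number: 8094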
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    io.kompose.service: nginx
  name: nginx-conf
  namespace: lazy-trading
data:
  nginx.conf: |
    server {
        # Lazy Trading configuration ---
        location = /api/v1/lazytrading {
            return 302 /api/v1/lazytrading/;
        }
        location /api/v1/lazytrading/ {
            proxy_pass http://{{ .Values.deployment.name }}:{{ .Values.service.ports.port }}/;
        }
    }
Related
Flask deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flask-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flask-api
  template:
    metadata:
      labels:
        app: flask-api
    spec:
      containers:
        - name: flask-api-container
          image: umarrafaqat/flask-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 5000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: flask-app-service
spec:
  type: ClusterIP
  ports:
    - port: 5000
  selector:
    app: flask-api
React deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: react-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: react-app
  template:
    metadata:
      labels:
        app: react-app
    spec:
      containers:
        - name: react-app-container
          image: umarrafaqat/react-app:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: react-app-service
spec:
  ports:
    - port: 3000
  selector:
    app: react-app
Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: react-app-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  rules:
    - host: localhost
      http:
        paths:
          - backend:
              service:
                name: react-app-service
                port:
                  number: 3000
            path: /
            pathType: Prefix
I want to access this app on localhost but cannot do so. I am running it on Minikube.
I have different Kubernetes deployments in GKE and I would like to access them from different external subdomains.
I tried to create 2 deployments with subdomains "sub1" and "sub2" and hostname "app", plus another deployment with hostname "app" and a Service that exposes it on the IP XXX.XXX.XXX.XXX configured in the DNS for app.mydomain.com.
I would like to access the 2 child deployments from sub1.app.mydomain.com and sub2.app.mydomain.com.
This should be automatic: when adding a new deployment, I don't want to change the DNS records every time.
Maybe I'm approaching the problem the wrong way; I'm new to GKE, any suggestions?
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-host
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-host
        type: proxy
    spec:
      hostname: app
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: app
      subdomain: sub1
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: app
      subdomain: sub2
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-expose-dns
spec:
  ports:
    - port: 80
  selector:
    name: my-host
  type: LoadBalancer
You want Ingress. There are several options available (Istio, nginx, traefik, etc). I like using nginx and it's really easy to install and work with. Installation steps can be found at kubernetes.github.io.
Once the Ingress Controller is installed, you want to make sure you've exposed it with a Service with type=LoadBalancer. Next, if you are using Google Cloud DNS, set up a wildcard entry for your domain with an A record pointing to the external IP address of your Ingress Controller's Service. In your case, it would be *.app.mydomain.com.
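For illustration, a minimal sketch of such a LoadBalancer Service for the controller (the name, namespace and label selector below are assumptions based on a stock ingress-nginx install, not taken from your manifests):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # assumed name from a standard ingress-nginx install
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed label from a standard install
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443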
Now all of your traffic to app.mydomain.com goes to that load balancer and is handled by your Ingress Controller, so you just need to add Service and Ingress entities for each service you want to route.
apiVersion: v1
kind: Service
metadata:
  name: my-service1
spec:
  selector:
    app: my-app-1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
apiVersion: v1
kind: Service
metadata:
  name: my-service2
spec:
  selector:
    app: my-app2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: ClusterIP
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
    - host: sub1.app.mydomain.com
      http:
        paths:
          - backend:
              serviceName: my-service1
              servicePort: 80
    - host: sub2.app.mydomain.com
      http:
        paths:
          - backend:
              serviceName: my-service2
              servicePort: 80
The routing shown is host based, but you could just as easily have handled those services as path based, so that all traffic to app.mydomain.com/service1 goes to one of your deployments, as in the sketch below.
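A sketch of that path-based variant, reusing the same services as above (how the path prefix is stripped or rewritten depends on the controller and its annotations, which are not shown here):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: path-based-ingress   # hypothetical name
spec:
  rules:
    - host: app.mydomain.com
      http:
        paths:
          - path: /service1
            backend:
              serviceName: my-service1
              servicePort: 80
          - path: /service2
            backend:
              serviceName: my-service2
              servicePort: 80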
SOLVED!
This is the correct nginx configuration:
server {
    listen 80;
    server_name ~^(?<subdomain>.*?)\.;
    resolver kube-dns.kube-system.svc.cluster.local valid=5s;
    location / {
        proxy_pass http://$subdomain.my-internal-host.default.svc.cluster.local;
        root /usr/share/nginx/html;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
It could be a solution, but in my case I need something more dynamic: I don't want to update the Ingress each time I add a subdomain.
I've almost solved it using an nginx proxy like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-1
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-1
        type: app
    spec:
      hostname: sub1
      subdomain: my-internal-host
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-subdomain-2
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-subdomain-2
        type: app
    spec:
      hostname: sub2
      subdomain: my-internal-host
      containers:
        - image: nginx:alpine
          name: nginx
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
        listen 80;
        server_name ~^(?<subdomain>.*?)\.;
        location / {
            proxy_pass http://$subdomain.my-internal-host;
            root /usr/share/nginx/html;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root /usr/share/nginx/html;
        }
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-proxy
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: my-proxy
        type: app
    spec:
      subdomain: my-internal-host
      containers:
        - image: nginx:alpine
          name: nginx
          volumeMounts:
            - name: nginx-config-dns-file
              mountPath: /etc/nginx/conf.d/default.conf.test
              subPath: nginx.conf
          ports:
            - name: nginx
              containerPort: 80
              hostPort: 80
      volumes:
        - name: nginx-config-dns-file
          configMap:
            name: nginx-config-dns-file
      restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
  name: my-internal-host
spec:
  selector:
    type: app
  clusterIP: None
  ports:
    - name: sk-port
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: sk-expose-dns
spec:
  ports:
    - port: 80
  selector:
    name: my-proxy
  type: LoadBalancer
I did understand that I need the Service 'my-internal-host' to allow all the deployments to see each other internally.
The only remaining problem is the proxy_pass in nginx: if I change it to 'proxy_pass http://sub1.my-internal-host;' it works, but it does not work with the regexp variable.
The problem is related to the nginx resolver.
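For completeness, a sketch of how the working configuration from the accepted fix above could be dropped into the same ConfigMap (the resolver address and the default namespace are the stock cluster-DNS values assumed in the solved config):
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
        listen 80;
        server_name ~^(?<subdomain>.*?)\.;
        # an explicit resolver is needed so nginx can resolve hostnames
        # built from variables at request time
        resolver kube-dns.kube-system.svc.cluster.local valid=5s;
        location / {
            proxy_pass http://$subdomain.my-internal-host.default.svc.cluster.local;
        }
    }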
I am working with Kubernetes on Google Cloud. I am trying to set up Traefik as the Ingress for the cluster. I based the code on the official docs at https://docs.traefik.io/user-guide/kubernetes/ but I get an error with the rule for Path Prefix Strip.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-api
  labels:
    app: auth-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
        version: v0.0.1
    spec:
      containers:
        - name: auth-api
          image: gcr.io/r10c-dev/auth-api:v0.1
          ports:
            - containerPort: 3000
          env:
            - name: AMQP_SERVICE
              value: broker:5672
            - name: CACHE_SERVICE
              value: session-cache
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
    - name: http
      targetPort: 80
      port: 3000
  type: NodePort
  selector:
    app: auth-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  rules:
    - http:
        paths:
          - path: /auth
            backend:
              serviceName: auth-api
              servicePort: http
In the GKE console the Deployment seems to be linked to the Service and the Ingress, but when I try to access the IP, the server returns a 502 error.
I am also using a static IP:
gcloud compute addresses create web-static-ip --global
I've set up Kubernetes to use the Traefik Ingress to provide name-based routing. I am a little lost in terms of how to configure the automatic LetsEncrypt SSL certs. How do I reference the TOML files and configure for HTTPS? I am using a simple container below with the NGINX image to test this.
Below is my YAML for the deployment/service/ingress.
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    app: hmweb
spec:
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: hmweb
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: web
              servicePort: http
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hmweb-deployment
  labels:
    app: hmweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hmweb
  template:
    metadata:
      labels:
        app: hmweb
    spec:
      containers:
        - name: hmweb
          image: nginx:latest
          envFrom:
            - configMapRef:
                name: config
          ports:
            - containerPort: 80
I have also included my ingress.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress-controller
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      containers:
        - image: traefik
          name: traefik-ingress-lb
          ports:
            - name: http
              containerPort: 80
            - name: admin
              containerPort: 8080
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - protocol: TCP
      port: 80
      name: web
    - protocol: TCP
      port: 8080
      name: admin
  type: LoadBalancer
You could build a custom image and include the TOML file that way; however, that would not be best practice. Here's how I did it:
1) Deploy your TOML configuration to Kubernetes as a ConfigMap, like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: cfg-traefik
  labels:
    app: traefik
data:
  traefik.toml: |
    defaultEntryPoints = ["http", "https"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
        [entryPoints.http.redirect]
          entryPoint = "https"
      [entryPoints.https]
        address = ":443"
        [entryPoints.https.tls]
    [acme]
      email = "you@email.com"
      storage = "/storage/acme.json"
      entryPoint = "https"
      acmeLogging = true
      onHostRule = true
      [acme.tlsChallenge]
2) Connect the configuration to your Traefik deployment. Here's my configuration:
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dpl-traefik
  labels:
    k8s-app: traefik
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik
  template:
    metadata:
      labels:
        k8s-app: traefik
        name: traefik
    spec:
      serviceAccountName: svc-traefik
      terminationGracePeriodSeconds: 60
      volumes:
        - name: config
          configMap:
            name: cfg-traefik
        - name: cert-storage
          persistentVolumeClaim:
            claimName: pvc-traefik
      containers:
        - image: traefik:alpine
          name: traefik
          volumeMounts:
            - mountPath: "/config"
              name: "config"
            - mountPath: "/storage"
              name: cert-storage
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: admin
              containerPort: 8080
          args:
            - --api
            - --kubernetes
            - --logLevel=INFO
            - --configFile=/config/traefik.toml
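The deployment above also mounts a PersistentVolumeClaim named pvc-traefik so acme.json survives pod restarts; a minimal sketch of such a claim (the 1Gi size and the default storage class are assumptions, not part of the original setup):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-traefik
  labels:
    app: traefik
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # assumed size; adjust as needed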
I have the following hello world deployment.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: hello:v0.0.1
          imagePullPolicy: Always
          args:
            - /hello
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
    - port: 80
      targetPort: 3000
      protocol: TCP
  selector:
    app: hello
  type: NodePort
And I have the Ingress controller deployed with a sidecar container:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alb-ingress-controller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alb-ingress-controller
    spec:
      containers:
        - name: server
          image: alb-ingress-controller:v0.0.1
          imagePullPolicy: Always
          args:
            - /server
            - --ingress-class=alb
            - --cluster-name=AAA
            - --aws-max-retries=20
            - --healthz-port=10254
          ports:
            - containerPort: 10254
              protocol: TCP
        - name: alb-sidecar
          image: sidecar:v0.0.1
          imagePullPolicy: Always
          args:
            - /sidecar
            - --port=5000
          ports:
            - containerPort: 5000
              protocol: TCP
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: alb-ingress
      serviceAccount: alb-ingress
---
apiVersion: v1
kind: Service
metadata:
  name: alb-ingress-controller-service
spec:
  ports:
    - port: 80
      targetPort: 5000
      protocol: TCP
  selector:
    app: alb-ingress-controller
  type: NodePort
And here is my Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: test-alb
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: hello-service
              servicePort: 80
          - path: /alb-sidecar
            backend:
              serviceName: alb-ingress-controller-service
              servicePort: 80
I would expect to access /alb-sidecar the same way that I access /hello, but only the /hello endpoint works for me; I keep getting 502 Bad Gateway for the /alb-sidecar endpoint. The sidecar container is just a simple web app listening on /alb-sidecar.
Do I need to do anything different when the sidecar container runs in a different namespace, and how would you run a sidecar next to the ALB ingress controller?
If you created the Deployment alb-ingress-controller and the Service alb-ingress-controller-service in another namespace, you need to create another Ingress resource in that same namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  namespace: alb-namespace
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: alb-service
spec:
  rules:
    - http:
        paths:
          - path: /alb-sidecar
            backend:
              serviceName: alb-ingress-controller-service
              servicePort: 80