Kubernetes: deploy multiple instances of the same app on dedicated URLs

I have a simple containerised web app deployed to AKS, and all works fine. The next step I'd like to take is to deploy X instances of the same web app, each on a dedicated URL, differentiated by path or port. This is so I can run automated tests against the web app, each test on its own dedicated URL (relying on sticky sessions won't work).
It does not seem like replicas can do it, as they are not addressable via a dedicated URL/port. And despite following many tutorials on the web, I'm struggling to come up with a YAML definition that would achieve this "elegantly" (i.e. in one Deployment).
Any ideas?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name
  namespace: namespace-name
spec:
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
        tier: frontend
    spec:
      containers:
      - name: app-name
        image: image_xyz
        imagePullPolicy: Always
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-name-service
  namespace: namespace-name
  labels:
    app: app-name
    tier: frontend
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: app-name
    tier: frontend
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-name-ingress
  namespace: namespace-name
spec:
  rules:
  - host: app.host.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-name-service
            port:
              number: 80
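For what it's worth, a minimal sketch of one way to do this (names assumed, not from the original post): since replicas behind a single Service are load-balanced rather than individually addressable, each instance gets its own Deployment and Service, and one shared Ingress routes a dedicated path per instance. Note the app must tolerate the path prefix, or the ingress controller must rewrite it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name-1          # one Deployment per instance
  namespace: namespace-name
spec:
  selector:
    matchLabels:
      app: app-name
      instance: "1"
  template:
    metadata:
      labels:
        app: app-name
        instance: "1"       # the instance label keeps Services from overlapping
    spec:
      containers:
      - name: app-name
        image: image_xyz
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-name-service-1
  namespace: namespace-name
spec:
  selector:
    app: app-name
    instance: "1"           # selects only this instance's pods
  ports:
  - port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-name-ingress
  namespace: namespace-name
spec:
  rules:
  - host: app.host.com
    http:
      paths:
      - path: /instance-1   # dedicated URL for instance 1
        pathType: Prefix
        backend:
          service:
            name: app-name-service-1
            port:
              number: 80
      # add one Deployment/Service pair and one path entry per extra instance
This is not a single Deployment, but the per-instance parts differ only in the instance number, so they template well (e.g. with Helm or Kustomize).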

Related

ArgoCD: How to not store semi-secret information in manifests?

I'm learning K8s, so bear with me as a noob.
I'm running a single-node K3s cluster at home and have successfully deployed the traefik/whoami application using the command below, but I would like to deploy it via ArgoCD.
cat apps/whoami/whoami.yaml | envsubst | kubectl apply -f -
The manifest I created:
---
apiVersion: v1
kind: Namespace
metadata:
  name: k3s-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deploy
  namespace: k3s-test
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami:v1.8.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
  namespace: k3s-test
  labels:
    service: whoami
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    protocol: TCP
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: k3s-test
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
  - host: whoami.${DOMAIN_NAME}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: whoami-svc
            port:
              number: 80
I want to publish my code to GitHub so that ArgoCD can sync it, but I don't want to expose information that is not necessarily secret, yet not necessarily public either. Currently my domain name is set as an environment variable (because I don't want to commit mydomain.com to my GitHub repo) and I'm using envsubst when running kubectl apply. Does ArgoCD have similar functionality? I found this GitHub issue suggesting ArgoCD probably doesn't have variable interpolation, but is there an alternative? Or do I need to store my domain name as a full-on K8s Secret?
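One avenue worth exploring (a sketch, not a universal ArgoCD feature): older ArgoCD releases (pre-2.6) supported Config Management Plugins declared in the argocd-cm ConfigMap, which can run envsubst at render time; newer releases move plugins into repo-server sidecars, but the idea is the same.
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    - name: envsubst
      generate:
        command: ["sh", "-c"]
        args: ["envsubst < whoami.yaml"]   # renders the manifest with env vars expanded
DOMAIN_NAME then has to reach the repo-server, for example via the plugin env in the Application spec, which can be kept out of the public repo. The heavier alternative is indeed a Secret plus something like Kustomize or Helm to inject it.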

Make the pod start only 1 container

Hi, I don't know whether this is an issue, since I don't have a lot of experience with Kubernetes.
I am trying a deployment in Kubernetes, but the logic behind the container is that it starts initialising itself instantly and calls a backend endpoint to register a user.
From the screenshot below I can't tell whether this is a misconfiguration in my deployment, but I think the pod starts multiple containers, and this breaks my deployment, since the flow goes like this:
Container starts
Calls the backend endpoint and registers a user
Kubernetes starts another container
Calls the backend endpoint and fails to register the user because it has already been registered
Pod fails to deploy
(Screenshot: the container count shown for the pod is 5.)
I'm adding the deployment YAML file as well:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vulos-hyperledger-scanner
  namespace: vuloss-scanner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vulos-hyperledger-scanner
  template:
    metadata:
      labels:
        app: vulos-hyperledger-scanner
    spec:
      containers:
      - name: vulos-hyperledger-scanner
        image: registry.digitalocean.com/notarised/vulos-hyperledger-scanner:1.6.5
        ports:
        - containerPort: 8080
        env:
        imagePullPolicy: Always
      imagePullSecrets:
      - name: do-registry
---
apiVersion: v1
kind: Service
metadata:
  name: vulos-hyperledger-scanner
  namespace: vuloss-scanner
spec:
  selector:
    app: vulos-hyperledger-scanner
  ports:
  - port: 80
    targetPort: 8080
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-production"
  name: vulos-hyperledger-scanner
  namespace: vuloss-scanner
spec:
  rules:
  - host: explorer.vulos.io
    http:
      paths:
      - backend:
          serviceName: vulos-hyperledger-scanner
          servicePort: 80
        path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
  - hosts:
    - explorer.vulos.io
    secretName: explorer-tls
I don't know whether this is an issue in my deployment, whether the statement above makes sense in Kubernetes, or whether I can make the pod start only one container rather than multiple.
Thank you
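Two hedged observations, not from the original thread: with replicas: 1, this pod spec only ever runs one container at a time, so a count of 5 in a dashboard usually means five restarts of the same container (a crash/restart loop), each restart re-running the registration. If the registration really is one-time work, a common pattern is to move it out of the serving container into a Job that runs at most once; a sketch, assuming a hypothetical REGISTER_ONLY switch in the image:
apiVersion: batch/v1
kind: Job
metadata:
  name: register-user
  namespace: vuloss-scanner
spec:
  backoffLimit: 0            # do not retry on failure, so registration runs at most once
  template:
    spec:
      restartPolicy: Never
      imagePullSecrets:
      - name: do-registry
      containers:
      - name: register
        image: registry.digitalocean.com/notarised/vulos-hyperledger-scanner:1.6.5
        env:
        - name: REGISTER_ONLY   # hypothetical flag, not in the original image
          value: "true"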

Expose services via Istio ingress gateway

I am new to Istio and I want to expose three services and route traffic to those services based on the port number passed to "website.com:port", or on a subdomain.
The services' deployment config files:
apiVersion: v1
kind: Service
metadata:
  name: visitor-service
  labels:
    app: visitor-service
spec:
  ports:
  - port: 8000
    nodePort: 30800
    targetPort: 8000
  selector:
    app: visitor-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visitor-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visitor-service
  template:
    metadata:
      labels:
        app: visitor-service
    spec:
      containers:
      - name: visitor-service
        image: visitor-service
        ports:
        - containerPort: 8000
The second service:
apiVersion: v1
kind: Service
metadata:
  name: auth-service
  labels:
    app: auth-service
spec:
  ports:
  - port: 3004
    nodePort: 30304
    targetPort: 3004
  selector:
    app: auth-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
      - name: auth-service
        image: auth-service
        ports:
        - containerPort: 3004
The third one:
apiVersion: v1
kind: Service
metadata:
  name: gateway
  labels:
    app: gateway
spec:
  ports:
  - port: 8080
    nodePort: 30808
    targetPort: 8080
  selector:
    app: gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: gateway
        ports:
        - containerPort: 8080
If someone can help with setting up the gateway and virtual service configuration, that would be great.
It seems like you simply want to expose your applications; for that reason Istio seems like total overkill, since it comes with a lot of overhead that you won't be using.
Regardless of whether you want to use Istio as your default ingress or any other ingress controller (nginx, traefik, ...), the following construct applies to all of them:
Expose the ingress controller via a service of type NodePort or LoadBalancer, depending on your infrastructure. In a cloud environment the latter will most likely work best for you (if on GKE, AKS, EKS, ...).
Once it is exposed, set up a DNS A record pointing to the external IP address. Afterwards you can start configuring your ingress; depending on which ingress controller you chose, the following YAML may need some adjustments (the example is given for Istio):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
If a request for something like httpbin.example.com comes in to your ingress controller, it is going to send the request to a service named httpbin on port 8000.
As can be seen in the YAML posted above, the rules and paths fields take a list (indicated by the - on the next line). To expose multiple services, simply add a new entry to the list, e.g.:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /httpbin
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
      - path: /apache
        pathType: Prefix
        backend:
          serviceName: apache
          servicePort: 8080
This is going to send requests like httpbin.example.com/httpbin/ to httpbin and httpbin.example.com/apache/ to apache.
For further information see:
https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
https://kubernetes.io/docs/concepts/services-networking/ingress/
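Since the question specifically asked for the Istio-native Gateway and VirtualService objects, here is a minimal sketch for subdomain-based routing (the subdomains are assumptions, not from the original post); each remaining service would get its own VirtualService following the same pattern:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: services-gateway
spec:
  selector:
    istio: ingressgateway       # binds to Istio's default ingress gateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "visitor.website.com"     # assumed subdomains
    - "auth.website.com"
    - "gateway.website.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: visitor-service
spec:
  hosts:
  - "visitor.website.com"
  gateways:
  - services-gateway
  http:
  - route:
    - destination:
        host: visitor-service   # the ClusterIP Service defined earlier
        port:
          number: 8000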

Kubernetes Ingress does not work with Traefik

I created a Kubernetes cluster in Google Cloud Platform, after which I installed Helm/Tiller on the cluster, and then installed Traefik with Helm as the official documentation says to do.
Now I'm trying to create an Ingress for a service, but if I add the annotation kubernetes.io/ingress.class: traefik, the load balancer for the Ingress is not created.
Without the annotation, however, it works with the default Ingress.
(The service type is NodePort.)
EDIT: I also tried this example in a clean Google Cloud Kubernetes cluster: https://supergiant.io/blog/using-traefik-as-ingress-controller-for-your-kubernetes-cluster/ but it's the same: when I choose kubernetes.io/ingress.class: traefik, no load balancer is created for the Ingress.
My files are:
animals-svc.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: bear
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: bear
---
apiVersion: v1
kind: Service
metadata:
  name: moose
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: moose
---
apiVersion: v1
kind: Service
metadata:
  name: hare
  annotations:
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: hare
animals-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: animals
  annotations:
    kubernetes.io/ingress.class: traefik
    # kubernetes.io/ingress.global-static-ip-name: "my-reserved-global-ip"
    # traefik.ingress.kubernetes.io/frontend-entry-points: http
    # traefik.ingress.kubernetes.io/redirect-entry-point: http
    # traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
  - host: hare.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: hare
          servicePort: http
  - host: bear.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: bear
          servicePort: http
  - host: moose.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: moose
          servicePort: http
animals-deployment.yaml:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: bear
  labels:
    app: animals
    animal: bear
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: bear
  template:
    metadata:
      labels:
        app: animals
        task: bear
        version: v0.0.1
    spec:
      containers:
      - name: bear
        image: supergiantkir/animals:bear
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: moose
  labels:
    app: animals
    animal: moose
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: moose
  template:
    metadata:
      labels:
        app: animals
        task: moose
        version: v0.0.1
    spec:
      containers:
      - name: moose
        image: supergiantkir/animals:moose
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: hare
  labels:
    app: animals
    animal: hare
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: hare
  template:
    metadata:
      labels:
        app: animals
        task: hare
        version: v0.0.1
    spec:
      containers:
      - name: hare
        image: supergiantkir/animals:hare
        ports:
        - containerPort: 80
The services are created, but the ingress load balancer is not created.
But if I remove the line kubernetes.io/ingress.class: traefik, it works with the default ingress of Kubernetes.
Traefik does not create a load balancer for you by default.
As the HTTP(S) load balancing with Ingress documentation mentions: "When you create an Ingress object, the GKE ingress controller creates a Google Cloud Platform HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services."
This is all applicable to the GKE ingress controller (gce); more info about gce can be found here: https://github.com/kubernetes/ingress-gce
If you would like to use Traefik as ingress, you have to expose the Traefik service with type: LoadBalancer.
Example:
Example:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 80
More info, with a lot of explanatory diagrams and a real working example, can be found in the Exposing Kubernetes Services to the internet using Traefik Ingress Controller article.
Hope this helps.
You can try adding more annotations, as below:
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
Like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-dashboard-ingress
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
    traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
  - host: traefik-ui.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-dashboard
          servicePort: 8080

Traefik Ingress bad rule

I am working with Kubernetes on Google Cloud. I am trying to set up Traefik as Ingress for the cluster. I based the code on the official site docs (https://docs.traefik.io/user-guide/kubernetes/), but I get an error with the rule for PathPrefixStrip.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-api
  labels:
    app: auth-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
        version: v0.0.1
    spec:
      containers:
      - name: auth-api
        image: gcr.io/r10c-dev/auth-api:v0.1
        ports:
        - containerPort: 3000
        env:
        - name: AMQP_SERVICE
          value: broker:5672
        - name: CACHE_SERVICE
          value: session-cache
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
  - name: http
    targetPort: 80
    port: 3000
  type: NodePort
  selector:
    app: auth-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  rules:
  - http:
      paths:
      - path: /auth
        backend:
          serviceName: auth-api
          servicePort: http
In the GKE console it seems the deployment is linked to the service and the ingress, but when I try to access the IP, the server returns an error 502.
I am also using a static IP:
gcloud compute addresses create web-static-ip --global
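One detail worth double-checking (an observation on the manifests above, not a confirmed fix): the Deployment's container listens on port 3000 (containerPort: 3000), but the Service declares targetPort: 80, so traffic is forwarded to a pod port where nothing listens, which is a classic cause of a 502. A Service that matches the Deployment would look like:
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  type: NodePort
  selector:
    app: auth-api
  ports:
  - name: http
    port: 3000
    targetPort: 3000   # must match the containerPort the app actually listens on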