I'm learning K8s, so bear with me as a noob.
I'm running a single-node K3s cluster at home, and have successfully deployed the traefik/whoami application using the command below, but would like to deploy it via ArgoCD.
cat apps/whoami/whoami.yaml | envsubst | kubectl apply -f -
The manifest I created:
---
apiVersion: v1
kind: Namespace
metadata:
  name: k3s-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami-deploy
  namespace: k3s-test
  labels:
    app: whoami
spec:
  replicas: 3
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami:v1.8.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: whoami-svc
  namespace: k3s-test
  labels:
    service: whoami
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      protocol: TCP
  selector:
    app: whoami
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami-ingress
  namespace: k3s-test
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: web
spec:
  rules:
    - host: whoami.${DOMAIN_NAME}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami-svc
                port:
                  number: 80
I want to publish my code to GitHub so that ArgoCD can sync it, but I don't want to expose information that isn't exactly secret, yet isn't exactly public either. Currently my domain name is set as an environment variable (because I don't want to commit mydomain.com to my GitHub repo) and I pipe the manifest through envsubst before kubectl apply. Does ArgoCD have similar functionality? I found this GitHub issue showing ArgoCD probably doesn't support variable interpolation, but is there an alternative? Or do I need to store my domain name as a full-on K8s Secret?
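One possible direction, for what it's worth: Argo CD can generate manifests through a config management plugin that runs an arbitrary command, so envsubst can be invoked at render time. A minimal sketch, assuming the legacy argocd-cm plugin format (older Argo CD versions), that envsubst is available in the repo-server image, and that DOMAIN_NAME is injected into the argocd-repo-server environment (e.g. from a Secret):

apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  configManagementPlugins: |
    # Render manifests through envsubst instead of returning them verbatim
    - name: envsubst
      generate:
        command: ["sh", "-c"]
        args: ["envsubst < whoami.yaml"]

The Application would then reference this plugin via spec.source.plugin.name: envsubst instead of using a plain directory source. Another option is to keep the domain in a Kustomize overlay that lives in a separate private repo, so the public repo holds only the generic base.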
I have run into an issue.
My goal: create multiple nginx deployments from the same "template" file and use Kustomize to replace the container name. This is just an example; in the next steps I will add/replace/remove lines (e.g. resources) from "nginx_template.yml" for different deployments. For now I want to make the patches work so they create multiple deployments :-) I am not even sure the structure is correct.
The structure is:
base/nginx_template.yml
base/kustomization.yml
base/apps/nginx1/nginx1.yml
base/apps/nginx2/nginx2.yml
base/nginx_template.yml:
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: template
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
base/kustomization.yml:
resources:
  - nginx_template.yml
patches:
  - path: ./apps/nginx1/nginx1.yml
    target:
      kind: Deployment
  - path: ./apps/nginx2/nginx2.yml
    target:
      kind: Deployment
base/apps/nginx1/nginx1.yml:
- op: replace
  path: /spec/template/spec/containers/0/name
  value: nginx-1
base/apps/nginx2/nginx2.yml:
- op: replace
  path: /spec/template/spec/containers/0/name
  value: nginx-2
All it does now is create only nginx-2. Thank you for any help.
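For reference, the output can be inspected and applied with kubectl's built-in Kustomize support, which makes it easy to see what the patches actually produce:

kubectl kustomize base/   # render the patched manifests to stdout for inspection
kubectl apply -k base/    # build and apply in one step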
I have been experimenting with network policies, and now pods can no longer communicate with each other even though I have deleted all the policies.
Namespace
apiVersion: v1
kind: Namespace
metadata:
  name: staging
  labels:
    env: staging
Service A
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-a
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-a
  template:
    metadata:
      labels:
        app: service-a
    spec:
      containers:
        - name: service-a
          image: busybox:1.33.1
          command: ["nc", "-lkv", "-p", "8080", "-e", "/bin/sh"]
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-a
  namespace: staging
spec:
  type: ClusterIP
  selector:
    app: service-a
  ports:
    - port: 8080
Service B
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-b
  namespace: staging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: service-b
  template:
    metadata:
      labels:
        app: service-b
    spec:
      containers:
        - name: service-b
          image: busybox:1.33.1
          command: ["nc", "-lkv", "-p", "8080", "-e", "/bin/sh"]
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: service-b
  namespace: staging
spec:
  type: ClusterIP
  selector:
    app: service-b
  ports:
    - port: 8080
Testing Communication
kubectl -n staging exec service-a-7c66d7cdf8-72gqq -- nc -vz service-b
Expected behaviour is that they can contact each other, but instead there is a timeout. So I check whether there are any network policies left:
kubectl -n staging get networkpolicy
>No resources found in staging namespace.
What I have tried
I have deleted the namespace, recreated it and recreated the two services.
I have gone through all namespaces looking for network policies to delete them, but there are none!
Before I started experimenting with the network policies everything worked fine, but now I cannot get things working again. For the network plugin (CNI) I am using Cilium.
I am pretty dumb, I simply forgot to specify the port the second time around. It should be:
kubectl -n staging exec service-a-7c66d7cdf8-72gqq -- nc -vz service-b 8080
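The same check can also be run from a throwaway pod instead of exec-ing into an existing one (a sketch, reusing the same busybox image):

kubectl -n staging run nettest --rm -it --image=busybox:1.33.1 --restart=Never -- nc -vz service-b 8080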
Hi, I don't know whether this is an actual issue, since I don't have a lot of experience with Kubernetes.
I am trying a deployment in Kubernetes, but the logic behind the container is that it starts initialising itself immediately and calls a backend endpoint to register a user.
From the screenshot below I can't tell whether this is a misconfiguration in my deployment, but I think the pod starts multiple containers, and this breaks my deployment since the flow goes like this:
Container starts
Calls backend endpoint and registers the user
Kubernetes starts another container
Calls backend endpoint and fails to register the user because it has already been registered
Pod fails to deploy
Screenshot
As you can see in the screenshot, the container count is 5.
Adding the deployment YAML file as well:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vulos-hyperledger-scanner
  namespace: vuloss-scanner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vulos-hyperledger-scanner
  template:
    metadata:
      labels:
        app: vulos-hyperledger-scanner
    spec:
      containers:
        - name: vulos-hyperledger-scanner
          image: registry.digitalocean.com/notarised/vulos-hyperledger-scanner:1.6.5
          ports:
            - containerPort: 8080
          env:
          imagePullPolicy: Always
      imagePullSecrets:
        - name: do-registry
---
apiVersion: v1
kind: Service
metadata:
  name: vulos-hyperledger-scanner
  namespace: vuloss-scanner
spec:
  selector:
    app: vulos-hyperledger-scanner
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: "letsencrypt-production"
  name: vulos-hyperledger-scanner
  namespace: vuloss-scanner
spec:
  rules:
    - host: explorer.vulos.io
      http:
        paths:
          - backend:
              serviceName: vulos-hyperledger-scanner
              servicePort: 80
            path: /
  # This section is only required if TLS is to be enabled for the Ingress
  tls:
    - hosts:
        - explorer.vulos.io
      secretName: explorer-tls
I don't know whether this is an issue in my deployment, whether the flow above makes sense in Kubernetes, and whether I can make the pod start only one container rather than multiple.
Thank you
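For anyone hitting the same symptom: a count of 5 on a single-container pod is usually the restart count rather than five parallel containers. A quick way to confirm (the pod name below is a placeholder):

kubectl -n vuloss-scanner get pods                    # check the RESTARTS column
kubectl -n vuloss-scanner describe pod <pod-name>     # events and last container state
kubectl -n vuloss-scanner logs <pod-name> --previous  # logs from the previous attempt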
I have a simple containerised web app deployed to AKS, and it all works fine. The next step I'd like to take is to deploy X instances of the same web app, each on a dedicated URL, differentiated by path or port. This is so I can run automated tests against the web app, each test on a dedicated URL (relying on sticky sessions won't work).
It does not seem like replicas can do it, as they are not addressable with a dedicated URL/port. And despite following many tutorials on the web, I'm struggling to come up with a YAML definition that would achieve this "elegantly" (i.e. in one Deployment).
Any ideas?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-name
  namespace: namespace-name
spec:
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
        tier: frontend
    spec:
      containers:
        - name: app-name
          image: image_xyz
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-name-service
  namespace: namespace-name
  labels:
    app: app-name
    tier: frontend
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    app: app-name
    tier: frontend
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-name-ingress
  namespace: namespace-name
spec:
  rules:
    - host: app.host.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-name-service
                port:
                  number: 80
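One direction that might work (a sketch, not a definitive answer): run the app as a StatefulSet instead of a Deployment, so each pod gets a stable name and an automatic statefulset.kubernetes.io/pod-name label, then create one Service per pod selecting on that label and give each Service its own Ingress path. A StatefulSet also needs a headless governing Service, omitted here.

apiVersion: v1
kind: Service
metadata:
  name: app-name-0   # one Service per pod: app-name-0, app-name-1, ...
  namespace: namespace-name
spec:
  type: ClusterIP
  ports:
    - port: 80
  selector:
    # label set automatically by the StatefulSet controller
    statefulset.kubernetes.io/pod-name: app-name-0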
I have to deploy two deployments on my Kubernetes cluster that use the same service to communicate, but the two deployments are located in two different namespaces:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: namespace1
  labels:
    app: app1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: eu.gcr.io/direct-variety-20998876/test1:dev
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
          ports:
            - containerPort: 8000
          imagePullPolicy: Always
          env:
            ...
and an almost identical second one, but in another namespace:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app2
  namespace: namespace2
  labels:
    app: app2
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - name: app2
          image: eu.gcr.io/direct-variety-20998876/test1:prod
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
          ports:
            - containerPort: 8000
          imagePullPolicy: Always
          env:
            ...
So I have to create a common service for both deployments that spans the two namespaces.
I tried:
kind: Service
apiVersion: v1
metadata:
  name: apps-service
  namespace: ???
spec:
  selector:
    app: ???
  ports:
    - protocol: TCP
      port: 8000
      targetPort: 8000
  type: NodePort
Until now I have created one service per app in its specific namespace, but is there a way to create a single service that manages both deployments (and then associate a single ingress with it)?
Many thanks in advance.
First, I would like to provide some general explanations.
As we can see in the Ingress documentation:
You must have an Ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.
Ingress Controller can be deployed in any namespace and is often deployed in a namespace separate from the application namespace.
Ingress resource (Ingress rules) should be deployed in the same namespace as the services they point to.
It is possible to have one ingress controller for multiple ingress resources.
Deploying an Ingress resource in the same namespace as the Services it points to is the most common approach (I recommend this approach).
However, there is a way to have an Ingress in one namespace and Services in other namespaces, using ExternalName Services.
I will create an example to illustrate how it may work.
Suppose I have two Deployments (app1, app2) deployed in two different Namespaces (namespace1, namespace2):
$ cat app1.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app1
  name: app1
  namespace: namespace1
spec:
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - image: nginx
          name: nginx
$ cat app2.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app2
  name: app2
  namespace: namespace2
spec:
  selector:
    matchLabels:
      app: app2
  template:
    metadata:
      labels:
        app: app2
    spec:
      containers:
        - image: nginx
          name: nginx
And I exposed these Deployments with ClusterIP Services:
$ cat svc-app1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app1
  name: app1
  namespace: namespace1
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: app1
$ cat svc-app2.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app2
  name: app2
  namespace: namespace2
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: app2
We want to have a single Ingress resource in a separate Namespace (default).
First, we need to deploy Services of type ExternalName that map a Service to a DNS name.
$ cat external-app1.yml
kind: Service
apiVersion: v1
metadata:
  name: external-app1
spec:
  type: ExternalName
  externalName: app1.namespace1.svc
$ cat external-app2.yml
kind: Service
apiVersion: v1
metadata:
  name: external-app2
spec:
  type: ExternalName
  externalName: app2.namespace2.svc
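Assuming the default cluster domain, app1.namespace1.svc expands to app1.namespace1.svc.cluster.local via the pods' DNS search path; resolution can be sanity-checked from a throwaway pod:

$ kubectl run dns-test --rm -it --image=busybox --restart=Never -- nslookup app1.namespace1.svc.cluster.local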
Then we can deploy Ingress resource:
$ cat ingress.yml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: app-ingress
spec:
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: external-app1
              servicePort: 80
          - path: /app2
            backend:
              serviceName: external-app2
              servicePort: 80
$ kubectl apply -f ingress.yml
ingress.networking.k8s.io/app-ingress created
Finally, we can check if it works as expected:
$ curl 34.118.X.207/app1
app1
$ curl 34.118.X.207/app2
app2
NOTE: This is a workaround and may work differently with different ingress controllers. It is usually better to have two or more Ingress resources in different namespaces.
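For comparison, a sketch of the recommended per-namespace variant for app1 (app2 would get an analogous Ingress in namespace2, pointing straight at the plain ClusterIP Service):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  namespace: namespace1
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /app1
            backend:
              serviceName: app1
              servicePort: 80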