How to automatically deploy a new image to a k8s cluster when it is pushed to the docker registry?

I have configured dockerhub to build a new image with tags latest and dev-<version> whenever a new tag <version> appears on GitHub. I have no idea how to configure Tekton or any other cloud-native tool to automatically deploy new images as they become available in the registry.
Here's my k8s configuration:
apiVersion: v1
kind: List
items:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app-ingress
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app-service
                    port:
                      number: 80
  - apiVersion: v1
    kind: Service
    metadata:
      name: my-app-service
    spec:
      ports:
        - port: 80
          targetPort: 8000
      selector:
        app: my-app
      type: LoadBalancer
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-local-deployment
      labels:
        app: my-app
        type: web
    spec:
      replicas: 2
      minReadySeconds: 15
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 25%
          maxSurge: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          imagePullSecrets:
            - name: regcred
          containers:
            - name: backend
              image: zuber93/my-app:dev-latest
              imagePullPolicy: IfNotPresent
              envFrom:
                - secretRef:
                    name: my-app-local-secret
              ports:
                - containerPort: 8000
              readinessProbe:
                httpGet:
                  path: /flvby
                  port: 8000
                initialDelaySeconds: 10
                periodSeconds: 5
            - name: celery
              image: zuber93/my-app:dev-latest
              imagePullPolicy: IfNotPresent
              workingDir: /code
              command: [ "/code/run/celery.sh" ]
              envFrom:
                - secretRef:
                    name: my-app-local-secret
            - name: redis
              image: redis:latest
              imagePullPolicy: IfNotPresent

Short answer:
Either set up a webhook from Docker Hub (https://docs.docker.com/docker-hub/webhooks/) to Tekton using Triggers,
or (depending on your security requirements and whether your cluster is reachable from the public internet)
poll Docker Hub and trigger Tekton when a new image is detected.
(This can be done in many different ways: a simple long-running service, a scheduled CronJob, etc. in k8s.)
So, you choose push or pull. ;)
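For the push approach, here is a rough sketch of the Tekton Triggers side. This is only an outline, not the original poster's setup: the service account and the TriggerTemplate named deploy-template are placeholders you would have to create, while $(body.push_data.tag) is the tag field Docker Hub sends in its webhook payload.
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: dockerhub-listener
spec:
  serviceAccountName: tekton-triggers-sa    # placeholder SA with Triggers permissions
  triggers:
    - name: dockerhub-push
      bindings:
        - ref: dockerhub-binding
      template:
        ref: deploy-template                # placeholder TriggerTemplate running your deploy Task
---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: dockerhub-binding
spec:
  params:
    - name: image-tag
      value: $(body.push_data.tag)          # tag field from the Docker Hub webhook payload
The EventListener exposes a service inside the cluster that you would route to (e.g. through an Ingress) and register as the webhook URL in Docker Hub.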
I would ask "why not trigger from your git repo directly?"

Finally, I've explored keel.sh. It detects newly appearing images and deploys them into the cluster.
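For reference, keel is typically driven by annotations on the workload itself. A minimal sketch against the Deployment from the question, using keel's documented annotations (the poll schedule is just an example value):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-local-deployment
  annotations:
    keel.sh/policy: force              # redeploy even when the tag itself (dev-latest) is unchanged
    keel.sh/trigger: poll              # poll the registry rather than waiting for a webhook
    keel.sh/pollSchedule: "@every 5m"
spec:
  # ... rest of the Deployment unchanged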

Related

mongodb microservice k8s persistent volume claim not persisting data

I have several microservices, each one with its own mongodb deployment. I would like to start with getting my auth service working with a persistent volume. I have watched courses where postgresql is used and read a lot in the kubernetes docs but am having trouble getting this to work for mongodb.
When I run skaffold dev the PVC is created with no errors. kubectl shows the PVC is in Bound status, and running describe on the PVC shows my mongo deployment as the user.
However, when I visit my client service in the browser, I can sign up, log out, and sign in again with no problem. But if I restart skaffold, so that it deletes and recreates the containers, my data is gone and I have to sign up again.
Here are my files
auth-mongo-depl.yaml
# auth-mongo service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      volumes:
        - name: auth-mongo-data
          persistentVolumeClaim:
            claimName: auth-mongo-pvc
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
              name: 'auth-mongo-port'
          volumeMounts:
            - name: auth-mongo-data
              mountPath: '/data/db'
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
---
# Persistent Volume Claim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auth-mongo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
auth-depl.yaml
# auth service base deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: isimmons33/ticketing-auth
          env:
            - name: MONGO_URI
              value: 'mongodb://auth-mongo-ip-srv:27017/auth'
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-ip-srv
spec:
  selector:
    app: auth
  type: ClusterIP
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000
api/users portion of my ingress-srv.yaml
- path: /api/users/?(.*)
  pathType: Prefix
  backend:
    service:
      name: auth-ip-srv
      port:
        number: 3000
My client fires off a POST request to /api/users/auth, with which I can successfully sign up or sign in as long as I don't restart skaffold.
I even used kubectl to get a shell into my mongo deployment and queried to see the new user account there as it should be. But of course it is gone after restarting skaffold.
I am on Windows 10 but am running everything through WSL2 (Ubuntu)
Thanks for any help
It is highly recommended to use StatefulSets for running databases in Kubernetes. With a Deployment, if your pod crashes for some reason and a new one is created, it is not guaranteed that the new pod will be attached to the same PV, hence you lose the data.
Have a look at this: https://kubernetes.io/blog/2017/01/running-mongodb-on-kubernetes-with-statefulsets
The solution as pointed out by raghu_manne was to use StatefulSets. But because the link posted is extremely old, here is the full solution that worked for me.
Also here is a youtube video I just found that explains StatefulSet and volumeClaimTemplates quite well.
How to run MongoDB with StatefulSets in Kubernetes
auth-mongo-depl.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  serviceName: auth-mongo
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          ports:
            - containerPort: 27017
          volumeMounts:
            - name: auth-mongo-data
              mountPath: /data/db
  volumeClaimTemplates:
    - metadata:
        name: auth-mongo-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 500Mi
---
# ClusterIp Service
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-ip-srv
spec:
  selector:
    app: auth-mongo
  type: ClusterIP
  ports:
    - name: auth-mongo-db
      protocol: TCP
      port: 27017
      targetPort: 27017
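One caveat to the above (my addition, not part of the original answer): a StatefulSet's serviceName is expected to point at a headless Service, and no Service named auth-mongo is defined above. A minimal sketch of one:
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo
spec:
  clusterIP: None          # headless: gives each StatefulSet pod a stable DNS identity
  selector:
    app: auth-mongo
  ports:
    - port: 27017
      targetPort: 27017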

kubectl - Error response from daemon: error while creating mount source path

I'm trying to install the SAP HANA Express docker image on a Kubernetes node in Google Cloud Platform as per the guide https://developers.sap.com/tutorials/hxe-k8s-advanced-analytics.html#7f5c99da-d511-479b-8745-caebfe996164, however during execution of step 7, "Deploy your containers and connect to them", I'm not getting the expected result.
I'm executing the command kubectl create -f hxe.yaml and here is the yaml file I'm using:
kind: ConfigMap
apiVersion: v1
metadata:
  creationTimestamp: 2018-01-18T19:14:38Z
  name: hxe-pass
data:
  password.json: |+
    {"master_password" : "HXEHana1"}
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: persistent-vol-hxe
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 150Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/hxe_pv"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hxe-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hxe
  labels:
    name: hxe
spec:
  selector:
    matchLabels:
      run: hxe
      app: hxe
      role: master
      tier: backend
  replicas: 1
  template:
    metadata:
      labels:
        run: hxe
        app: hxe
        role: master
        tier: backend
    spec:
      initContainers:
        - name: install
          image: busybox
          command: [ 'sh', '-c', 'chown 12000:79 /hana/mounts' ]
          volumeMounts:
            - name: hxe-data
              mountPath: /hana/mounts
      volumes:
        - name: hxe-data
          persistentVolumeClaim:
            claimName: hxe-pvc
        - name: hxe-config
          configMap:
            name: hxe-pass
      imagePullSecrets:
        - name: docker-secret
      containers:
        - name: hxe-container
          image: "store/saplabs/hanaexpress:2.00.045.00.20200121.1"
          ports:
            - containerPort: 39013
              name: port1
            - containerPort: 39015
              name: port2
            - containerPort: 39017
              name: port3
            - containerPort: 8090
              name: port4
            - containerPort: 39041
              name: port5
            - containerPort: 59013
              name: port6
          args: [ "--agree-to-sap-license", "--dont-check-system", "--passwords-url", "file:///hana/hxeconfig/password.json" ]
          volumeMounts:
            - name: hxe-data
              mountPath: /hana/mounts
            - name: hxe-config
              mountPath: /hana/hxeconfig
        - name: sqlpad-container
          image: "sqlpad/sqlpad"
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hxe-connect
  labels:
    app: hxe
spec:
  type: LoadBalancer
  ports:
    - port: 39013
      targetPort: 39013
      name: port1
    - port: 39015
      targetPort: 39015
      name: port2
    - port: 39017
      targetPort: 39017
      name: port3
    - port: 39041
      targetPort: 39041
      name: port5
  selector:
    app: hxe
---
apiVersion: v1
kind: Service
metadata:
  name: sqlpad
  labels:
    app: hxe
spec:
  type: LoadBalancer
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP
      name: sqlpad
  selector:
    app: hxe
I'm also using the latest version of the HANA Express Edition docker image, store/saplabs/hanaexpress:2.00.045.00.20200121.1, which you can see available here: https://hub.docker.com/_/sap-hana-express-edition/plans/f2dc436a-d851-4c22-a2ba-9de07db7a9ac?tab=instructions
The error I'm getting is the one from the question title (screenshot omitted): Error response from daemon: error while creating mount source path.
Any thought on what could be wrong?
Best regards and happy new year for everybody.
Thanks to Mahboob's suggestion I can now start the pods (partially) and the issue no longer pops up at the "busybox" init-container stage. The problem was that I was using a Container-Optimized image for the node pool while the required one is Ubuntu. If you are facing a similar issue, double-check the image flavor you choose when creating the node pool.
However, I now have a different issue: the pods are starting (both the hxe and the sqlpad containers), but one of them, the sqlpad container, crashes at some point after starting and the pod gets stuck in the CrashLoopBackOff state. As you can see in the picture below (screenshot omitted), the pod sits in CrashLoopBackOff with only 1/2 containers started, then suddenly both are running.
I'm not hitting the right spot to solve this problem since I'm a newcomer to the kubernetes and docker world. Hope some of you can bring some light to me.
Best regards.

Can't configure ingress on gcloud properly

I am trying to deploy a simple app on Google Cloud. I am testing the GitLab cluster integration.
Here is my k8s yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: "my-service"
  labels:
    run: service
spec:
  type: NodePort
  selector:
    run: "service"
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
      name: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-api
  namespace: "my-service"
  labels:
    run: service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
        - name: service
          image: "gcr.io/test/service:latest"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 9000
              protocol: TCP
          volumeMounts:
            - name: test
              mountPath: /usr/test
      volumes:
        - name: test
          emptyDir: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: "service-ingress"
  namespace: "my-service"
  labels:
    run: service
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: service
              servicePort: 9000
If I log into the pod I can curl the service on the NodePort, but if I try to hit the ingress address I just get an error (screenshot omitted).
I am not sure why there are 2 backend services on the load balancer that is created automatically; the one that points to my app shows as unhealthy (screenshot of the load balancer backends omitted).
You need to define a readiness probe in your pod spec, because the GKE ingress controller picks up its load balancer health check from the readiness probe. (As an aside, two backends on the load balancer is expected: the GCE ingress controller creates a default backend in addition to the one for your service.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-v1
  namespace: "my-service"
  labels:
    run: service
spec:
  replicas: 1
  selector:
    matchLabels:
      run: service
  template:
    metadata:
      labels:
        run: service
    spec:
      serviceAccountName: service-api
      containers:
        - name: service
          image: "gcr.io/test/service:latest"
          imagePullPolicy: IfNotPresent
          readinessProbe:
            httpGet:
              path: /healthz
              port: 9000   # probe the port the service actually targets
            initialDelaySeconds: 3
            periodSeconds: 3
          ports:
            - containerPort: 9000
              protocol: TCP
          volumeMounts:
            - name: test
              mountPath: /usr/test
      volumes:
        - name: test
          emptyDir: {}
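On newer GKE versions you can also attach an explicit health check to the Service through a BackendConfig instead of relying solely on the readiness probe. A sketch, assuming the app really serves /healthz on its serving port (the BackendConfig name is a placeholder):
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: service-backendconfig
  namespace: "my-service"
spec:
  healthCheck:
    type: HTTP
    requestPath: /healthz    # assumed health endpoint
    port: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: service
  namespace: "my-service"
  annotations:
    cloud.google.com/backend-config: '{"default": "service-backendconfig"}'
spec:
  # ... same NodePort spec as in the question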

ERROR! no action detected in task, ansible

I am using ansible version 2.5.1 with python version 2.7.17, and I have openshift installed.
The playbook looks like this:
---
- hosts: node 1
  tasks:
    - name: Create a k8s namespace
      k8s:
        name: CC_Namespace
        api_version: v1
        kind: Namespace
        state: present
    # Deployment Frontend
    - name: Create a Frontend Deployment Object
      k8s:
        apiVersion: v1
        kind: Deployment
        metadata:
          name: nginx-frontend-deployment
          labels:
            app: nginx
        spec:
          replicas: 4
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.7.9
                  ports:
                    - containerPort: 80
                  livenessProbe:
                    exec:
                      command:
                        - /ready
                  readinessProbe:
                    exec:
                      command:
                        - /ready
    # Deployment Backend
    - name: Create a Backend Deployment Object
      k8s:
        apiVersion: v1
        kind: Deployment
        metadata:
          name: nginx-backend-deployment
          labels:
            app: nginx
        spec:
          replicas: 6
          selector:
            matchLabels:
              app: nginx
          template:
            metadata:
              labels:
                app: nginx
            spec:
              containers:
                - name: nginx
                  image: nginx:1.7.9 # change to Dockerfile
                  ports:
                    - containerPort: 80
                  livenessProbe:
                    exec:
                      command:
                        - /ready
                  readinessProbe:
                    exec:
                      command:
                        - /ready
    # Service Backend
    - name: Create a Backend Service Object
      k8s:
        apiVersion: v1
        kind: Service
        metadata:
          name: cc-backend-service
        spec:
          selector:
            app: CCApp
          ports:
            - protocol: TCP
              port: 80
          type: ClusterIP
    # Service Frontend
    - name: Create a Frontend Service Object
      k8s:
        apiVersion: v1
        kind: Service
        metadata:
          name: cc-frontend-service
        spec:
          selector:
            app: CCApp
          ports:
            - protocol: TCP
              port: 80
          type: NodePort
and this is the error:
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
ERROR! no action detected in task. This often indicates a misspelled module name, or incorrect module path.
The error appears to have been in '/home/rocco/cc-webapp.yml': line 4, column 5, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
tasks:
- name: Create a k8s namespace
^ here
The minimum Ansible version to have the k8s module available is 2.6. (Reference)
No choice, you have to upgrade.
Note: I tested your playbook syntax without any errors in ansible 2.9.2
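For reference, on Ansible 2.6+ the k8s module also accepts a complete manifest under its definition parameter. A sketch of the first deployment task rewritten that way (note that Deployment lives in apps/v1, not v1):
- name: Create a Frontend Deployment Object
  k8s:
    state: present
    definition:
      apiVersion: apps/v1      # Deployment belongs to the apps/v1 API group
      kind: Deployment
      metadata:
        name: nginx-frontend-deployment
        labels:
          app: nginx
      spec:
        replicas: 4
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: nginx
                image: nginx:1.7.9
                ports:
                  - containerPort: 80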

Openshift ImageChange trigger gets deleted in Deploymentconfig when applying template

I am currently working on a template for OpenShift and my ImageChange trigger gets deleted when I initially instantiate the application. My template contains the following objects:
ImageStream
BuildConfig
Service
Route
Deploymentconfig
I guess the route is irrelevant, but this is what it looks like so far (for a better overview I will post the objects separately, but they are all items in my template).
ImageStream
- kind: ImageStream
  apiVersion: v1
  metadata:
    labels:
      app: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
BuildConfig
- kind: BuildConfig
  apiVersion: v1
  metadata:
    labels:
      app: my-app
      deploymentconfig: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
    selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/buildconfigs/my-app
  spec:
    runPolicy: Serial
    source:
      git:
        ref: pre-prod
        uri: 'ssh://git@git.myreopo.net:port/project/my-app.git'
      sourceSecret:
        name: git-secret
      type: Git
    strategy:
      type: Source
      sourceStrategy:
        env:
          - name: HTTP_PROXY
            value: 'http://user:password@proxy.com:8080'
          - name: HTTPS_PROXY
            value: 'http://user:password@proxy.com:8080'
          - name: NO_PROXY
            value: .something.net
        from:
          kind: ImageStreamTag
          name: 'nodejs:8'
          namespace: openshift
    output:
      to:
        kind: ImageStreamTag
        name: 'my-app:latest'
        namespace: ${IMAGE_NAMESPACE}
Service
- kind: Service
  apiVersion: v1
  metadata:
    name: my-app
    labels:
      app: my-app
  spec:
    selector:
      deploymentconfig: my-app
    ports:
      - name: 8080-tcp
        port: 8080
        protocol: TCP
        targetPort: 8080
    sessionAffinity: None
    type: ClusterIP
DeploymentConfig
Now what is already weird in the DeploymentConfig is that under spec.template.spec.containers[0].image I have to specify the full path to the repository to make it work, otherwise I get an error pulling the image (even though the documentation says my-app:latest would be correct).
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    labels:
      app: my-app
      deploymentconfig: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
    selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/deploymentconfigs/my-app
  spec:
    selector:
      app: my-app
      deploymentconfig: my-app
    strategy:
      type: Rolling
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailability: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
    replicas: 1
    template:
      metadata:
        labels:
          app: my-app
          deploymentconfig: my-app
      spec:
        containers:
          - name: my-app-container
            image: "${REPOSITORY_IP}:${REPOSITORY_PORT}/${IMAGE_NAMESPACE}/my-app:latest"
            imagePullPolicy: Always
            ports:
              - containerPort: 8080
                protocol: TCP
              - containerPort: 8081
                protocol: TCP
            env:
              - name: MONGODB_USERNAME
                valueFrom:
                  secretKeyRef:
                    name: my-app-database
                    key: database-user
              - name: MONGODB_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: my-app-database
                    key: database-password
              - name: MONGODB_DATABASE
                value: "myapp"
              - name: ROUTE_PATH
                value: /my-app
              - name: MONGODB_AUTHDB
                value: "myapp"
              - name: MONGODB_PORT
                value: "27017"
              - name: HTTP_PORT
                value: "8080"
              - name: HTTPS_PORT
                value: "8082"
        restartPolicy: Always
        dnsPolicy: ClusterFirst
    triggers:
      - type: ImageChange
        imageChangeParams:
          automatic: true
          from:
            kind: ImageStreamTag
            name: 'my-app:latest'
            namespace: ${IMAGE_NAMESPACE}
          containerNames:
            - my-app-container
      - type: ConfigChange
I deploy the application using
oc process -f ./openshift/template.yaml ..Parameters... | oc apply -f -
But the outcome is the same when I use oc new-app.
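As an aside on the full-image-path weirdness mentioned above: once the ImageChange trigger is actually in place, OpenShift resolves the ImageStreamTag and injects the full pull spec into the container, which is why DeploymentConfigs generated by oc new-app can ship with a placeholder image. A sketch, not taken from this template:
containers:
  - name: my-app-container
    image: ' '    # placeholder; the ImageChange trigger injects the resolved pull spec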
The weird thing is: the application gets deployed and is running fine, but image changes have no effect. So I exported the DeploymentConfig and found that it was missing the ImageChange trigger, leaving the trigger part as
triggers:
  - type: ConfigChange
At first I thought this was due to the fact that maybe the build was not ready when I tried to apply the DeploymentConfig, so I created a build first and waited for it to finish. Afterwards I deployed the rest of the application (Service, Route, DeploymentConfig). The outcome was the same, however. If I use the web GUI and change the DeploymentConfig there (screenshots omitted), fill out namespace, app and tag (latest) and hit apply, everything works as it should. I just can't figure out why the trigger is being ignored initially. Would be great if someone has an idea where I'm wrong.
Versions I am using are
oc: v3.9.0
kubernetes: v1.6.1
openshift v3.6.173.0.140
OK, the answer was pretty simple. Turned out it was just an indentation error in the yaml file for the DeploymentConfig. Instead of (triggers nested one level too deep, inside the pod template spec):
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      triggers:
        - type: ImageChange
          imageChangeParams:
            automatic: true
            containerNames:
              - alpac-studio-container
            from:
              kind: ImageStreamTag
              name: alpac-studio:latest
        - type: ConfigChange
It has to be (triggers dedented to the DeploymentConfig spec level):
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
  triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - alpac-studio-container
        from:
          kind: ImageStreamTag
          name: alpac-studio:latest
    - type: ConfigChange
So the triggers have to be on the same level as e.g. template and strategy