OpenShift ImageChange trigger gets deleted in DeploymentConfig when applying template - deployment

I am currently working on a template for OpenShift, and my ImageChange trigger gets deleted when I initially instantiate the application. My Template contains the following objects:
ImageStream
BuildConfig
Service
Route
Deploymentconfig
I guess the Route is irrelevant, but this is what it looks like so far (for a better overview I will post the objects separately, but they are all items in my Template).
ImageStream
- kind: ImageStream
  apiVersion: v1
  metadata:
    labels:
      app: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
BuildConfig
- kind: BuildConfig
  apiVersion: v1
  metadata:
    labels:
      app: my-app
      deploymentconfig: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
    selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/buildconfigs/my-app
  spec:
    runPolicy: Serial
    source:
      git:
        ref: pre-prod
        uri: 'ssh://git@git.myreopo.net:port/project/my-app.git'
      sourceSecret:
        name: git-secret
      type: Git
    strategy:
      type: Source
      sourceStrategy:
        env:
          - name: HTTP_PROXY
            value: 'http://user:password@proxy.com:8080'
          - name: HTTPS_PROXY
            value: 'http://user:password@proxy.com:8080'
          - name: NO_PROXY
            value: .something.net
        from:
          kind: ImageStreamTag
          name: 'nodejs:8'
          namespace: openshift
    output:
      to:
        kind: ImageStreamTag
        name: 'my-app:latest'
        namespace: ${IMAGE_NAMESPACE}
Service
- kind: Service
  apiVersion: v1
  metadata:
    name: my-app
    labels:
      app: my-app
  spec:
    selector:
      deploymentconfig: my-app
    ports:
      - name: 8080-tcp
        port: 8080
        protocol: TCP
        targetPort: 8080
    sessionAffinity: None
    type: ClusterIP
DeploymentConfig
Now, what is already weird in the DeploymentConfig is that under spec.template.spec.containers[0].image I have to specify the full path to the repository to make it work; otherwise I get an error pulling the image (even though the documentation says my-app:latest should be correct).
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    labels:
      app: my-app
      deploymentconfig: my-app
    name: my-app
    namespace: ${IMAGE_NAMESPACE}
    selfLink: /oapi/v1/namespaces/${IMAGE_NAMESPACE}/deploymentconfigs/my-app
  spec:
    selector:
      app: my-app
      deploymentconfig: my-app
    strategy:
      type: Rolling
      rollingParams:
        intervalSeconds: 1
        maxSurge: 25%
        maxUnavailability: 25%
        timeoutSeconds: 600
        updatePeriodSeconds: 1
    replicas: 1
    template:
      metadata:
        labels:
          app: my-app
          deploymentconfig: my-app
      spec:
        containers:
          - name: my-app-container
            image: "${REPOSITORY_IP}:${REPOSITORY_PORT}/${IMAGE_NAMESPACE}/my-app:latest"
            imagePullPolicy: Always
            ports:
              - containerPort: 8080
                protocol: TCP
              - containerPort: 8081
                protocol: TCP
            env:
              - name: MONGODB_USERNAME
                valueFrom:
                  secretKeyRef:
                    name: my-app-database
                    key: database-user
              - name: MONGODB_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: my-app-database
                    key: database-password
              - name: MONGODB_DATABASE
                value: "myapp"
              - name: ROUTE_PATH
                value: /my-app
              - name: MONGODB_AUTHDB
                value: "myapp"
              - name: MONGODB_PORT
                value: "27017"
              - name: HTTP_PORT
                value: "8080"
              - name: HTTPS_PORT
                value: "8082"
        restartPolicy: Always
        dnsPolicy: ClusterFirst
        triggers:
          - type: ImageChange
            imageChangeParams:
              automatic: true
              from:
                kind: ImageStreamTag
                name: 'my-app:latest'
                namespace: ${IMAGE_NAMESPACE}
              containerNames:
                - my-app-container
          - type: ConfigChange
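(For context on the image path oddity mentioned above: when the ImageChange trigger is present and fires, OpenShift resolves the ImageStreamTag and writes the full pull spec back into the container, so a working DeploymentConfig ends up looking roughly like the sketch below; the digest is a made-up placeholder. Without the trigger, a bare my-app:latest is treated as a plain image reference, which is why the full registry path was needed.)
# Roughly what OpenShift writes back once the ImageChange trigger fires (illustrative only):
spec:
  template:
    spec:
      containers:
        - name: my-app-container
          image: ${REPOSITORY_IP}:${REPOSITORY_PORT}/${IMAGE_NAMESPACE}/my-app@sha256:<digest>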
I deploy the application using
oc process -f ./openshift/template.yaml ..Parameters... | oc apply -f -
But the outcome is the same when I use oc new-app.
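For reference, a filled-in version of that command might look like the sketch below; the parameter names come from the template above, and the values are placeholders:
oc process -f ./openshift/template.yaml \
  -p IMAGE_NAMESPACE=<namespace> \
  -p REPOSITORY_IP=<registry-ip> \
  -p REPOSITORY_PORT=<registry-port> \
  | oc apply -f -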
The weird thing is: the application gets deployed and runs fine, but image changes have no effect. So I exported the DeploymentConfig and found that the ImageChange trigger was missing, leaving the trigger section as
triggers:
  - type: ConfigChange
At first I thought this was because the build was not ready when I tried to apply the DeploymentConfig, so I created a build first and waited for it to finish. Afterwards I deployed the rest of the application (Service, Route, DeploymentConfig), but the outcome was the same. However, if I use the web GUI, edit the trigger of the DeploymentConfig there, fill out namespace, app and tag (latest), and hit apply, everything works as it should. I just can't figure out why the trigger is being ignored initially. It would be great if someone has an idea where I'm wrong.
The versions I am using are:
oc: v3.9.0
kubernetes: v1.6.1
openshift v3.6.173.0.140

OK, the answer was pretty simple. It turned out to be just an indentation error in the YAML for the DeploymentConfig. Instead of
      # (triggers accidentally nested inside spec.template.spec)
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      triggers:
        - type: ImageChange
          imageChangeParams:
            automatic: true
            containerNames:
              - alpac-studio-container
            from:
              kind: ImageStreamTag
              name: alpac-studio:latest
        - type: ConfigChange
It has to be
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
  # (triggers dedented to the DeploymentConfig's spec level)
  triggers:
    - type: ImageChange
      imageChangeParams:
        automatic: true
        containerNames:
          - alpac-studio-container
        from:
          kind: ImageStreamTag
          name: alpac-studio:latest
    - type: ConfigChange
So the triggers have to be at the same level as, e.g., template and strategy (directly under the DeploymentConfig's spec), not inside the pod template's spec.
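After fixing the indentation and re-running oc process | oc apply, one way to confirm the trigger actually landed on the DeploymentConfig is oc set triggers (a quick sketch; the dc name follows the template above):
# List the triggers registered on the DeploymentConfig
oc set triggers dc/my-app

# If necessary, the same image trigger can also be attached imperatively
oc set triggers dc/my-app --from-image=my-app:latest -c my-app-container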

Related

Cannot Connect Kubernetes Secrets to Kubernetes Deployment (Values Are Empty)

I have a Golang microservice application with the following Kubernetes manifest configuration:
apiVersion: v1 # Service for accessing store application (this) from Ingress...
kind: Service
metadata:
  name: store-internal-service
  namespace: store-namespace
spec:
  type: ClusterIP
  selector:
    app: store-internal-service
  ports:
    - name: http
      port: 8000
      targetPort: 8000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: store-application-service
  namespace: store-namespace
  labels:
    app: store-application-service
spec:
  selector:
    matchLabels:
      app: store-internal-service
  template:
    metadata:
      labels:
        app: store-internal-service
    spec:
      containers:
        - name: store-application
          image: <image>
          envFrom:
            - secretRef:
                name: project-secret-store
          ports:
            - containerPort: 8000
              protocol: TCP
          imagePullPolicy: Always
          env:
            - name: APPLICATION_PORT
              value: "8000"
            - name: APPLICATION_HOST
              value: "localhost"
      terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: Secret
metadata:
  name: project-secret-store
  namespace: store-namespace
type: Opaque
stringData:
  # Prometheus Server Credentials...
  PROMETHEUS_HOST: "prometheus-internal-service"
  PROMETHEUS_PORT: "9090"
  # POSTGRESQL CONFIGURATION.
  DATABASE_HOST: "postgres-internal-service"
  DATABASE_PORT: "5432"
  DATABASE_USER: "postgres_user"
  DATABASE_PASSWORD: "postgres_password"
  DATABASE_NAME: "store_db"
Also, for test purposes, I've declared the following variables in order to receive the secret values in my application:
var (
	POSTGRES_USER     = os.Getenv("DATABASE_USER")
	POSTGRES_PASSWORD = os.Getenv("DATABASE_PASSWORD")
	POSTGRES_DATABASE = os.Getenv("DATABASE_NAME")
	POSTGRES_HOST     = os.Getenv("DATABASE_HOST")
	POSTGRES_PORT     = os.Getenv("DATABASE_PORT")
)
The problem is that when I run my application and after some time check its logs using kubectl logs <my-application-pod-name> --namespace=store-namespace, it turns out that all of these Go variables are empty, despite the fact that they have all been declared in the Secret.
There are probably other issues that could cause this, but if there is an error in the configuration to point out, please share your thoughts about it :)
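Without more output it is hard to tell where the values get lost, but a quick way to check each link in the chain might look like this (a sketch; the pod name is a placeholder):
# 1. Confirm the Secret exists and contains the expected keys
kubectl get secret project-secret-store -n store-namespace -o yaml

# 2. Check the environment the container actually sees
kubectl exec -n store-namespace <store-pod-name> -- env | grep DATABASE

# 3. Look for events such as a missing or misnamed secretRef
kubectl describe pod <store-pod-name> -n store-namespace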

How to automatically deploy a new image to a k8s cluster as it is pushed to the Docker registry?

I have configured Docker Hub to build a new image with the tags latest and dev-<version> whenever a new tag <version> appears on GitHub. I have no idea how to configure Tekton or any other cloud-native tool to automatically deploy new images as they become available in the registry.
Here's my k8s configuration:
apiVersion: v1
kind: List
items:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app-ingress
    spec:
      rules:
        - http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app-service
                    port:
                      number: 80
  - apiVersion: v1
    kind: Service
    metadata:
      name: my-app-service
    spec:
      ports:
        - port: 80
          targetPort: 8000
      selector:
        app: my-app
      type: LoadBalancer
  - apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app-local-deployment
      labels:
        app: my-app
        type: web
    spec:
      replicas: 2
      minReadySeconds: 15
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 25%
          maxSurge: 1
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          imagePullSecrets:
            - name: regcred
          containers:
            - name: backend
              image: zuber93/my-app:dev-latest
              imagePullPolicy: IfNotPresent
              envFrom:
                - secretRef:
                    name: my-app-local-secret
              ports:
                - containerPort: 8000
              readinessProbe:
                httpGet:
                  path: /flvby
                  port: 8000
                initialDelaySeconds: 10
                periodSeconds: 5
            - name: celery
              image: zuber93/my-app:dev-latest
              imagePullPolicy: IfNotPresent
              workingDir: /code
              command: [ "/code/run/celery.sh" ]
              envFrom:
                - secretRef:
                    name: my-app-local-secret
            - name: redis
              image: redis:latest
              imagePullPolicy: IfNotPresent
The short answer is:
Either set up a webhook from Docker Hub (https://docs.docker.com/docker-hub/webhooks/) to Tekton using Tekton Triggers,
or (depending on your security requirements and whether your cluster is reachable from the public internet)
poll Docker Hub and trigger Tekton when a new image is detected.
(This can be done in many different ways: a simple always-running service, a scheduled CronJob, etc. in k8s.)
So, you choose push or pull. ;)
I would ask "why not trigger from your git repo directly?"
Finally, I explored keel.sh. It can watch for new images appearing in the registry and deploy them into the cluster.
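For anyone else taking the Keel route, a minimal sketch of what that can look like (based on Keel's documented annotations; the poll interval is just an example and worth double-checking against the current Keel docs) is to annotate the Deployment so Keel polls the registry and force-redeploys when the mutable dev-latest tag is rebuilt:
# Excerpt: annotations added to the existing my-app-local-deployment metadata
metadata:
  name: my-app-local-deployment
  annotations:
    keel.sh/policy: force             # redeploy even though the tag name (dev-latest) stays the same
    keel.sh/trigger: poll             # poll the registry instead of waiting for a webhook
    keel.sh/pollSchedule: "@every 5m"
With a mutable tag like dev-latest, imagePullPolicy would also need to be Always rather than IfNotPresent for the new image to actually be pulled.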

Communication between pods

I am currently in the process of setting up sentry.io, but I am having problems setting it up in OpenShift 3.11.
I have pods running for Sentry itself, PostgreSQL, Redis and Memcached, but according to the log messages they are not able to communicate with each other.
sentry.exceptions.InvalidConfiguration: Error 111 connecting to 127.0.0.1:6379. Connection refused.
Do I need to create a network like in Docker, or should the pods (all in the same namespace) be able to talk to each other by default? I have admin rights for the complete project, so I can also work with the console and not only the web interface.
Best wishes
EDIT: Adding the deployment config for Sentry and its service, and for the sake of simplicity the Postgres config and service. I also blanked out some unnecessary information with the keyword BLANK; if I went overboard, please let me know and I'll look it up.
Deployment config for sentry:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  generation: 20
  labels:
    app: sentry
  name: sentry
  namespace: test
  resourceVersion: '506667843'
  selfLink: BLANK
  uid: BLANK
spec:
  replicas: 1
  selector:
    app: sentry
    deploymentconfig: sentry
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftWebConsole
      creationTimestamp: null
      labels:
        app: sentry
        deploymentconfig: sentry
    spec:
      containers:
        - env:
            - name: SENTRY_SECRET_KEY
              value: Iamsosecret
            - name: C_FORCE_ROOT
              value: '1'
            - name: SENTRY_FILESTORE_DIR
              value: /var/lib/sentry/files/data
          image: BLANK
          imagePullPolicy: Always
          name: sentry
          ports:
            - containerPort: 9000
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/sentry/files
              name: sentry-1
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - emptyDir: {}
          name: sentry-1
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - sentry
        from:
          kind: ImageStreamTag
          name: 'sentry:latest'
          namespace: catcloud
        lastTriggeredImage: BLANK
      type: ImageChange
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: Deployment config has minimum availability.
      status: 'True'
      type: Available
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: replication controller "sentry-19" successfully rolled out
      reason: NewReplicationControllerAvailable
      status: 'True'
      type: Progressing
  details:
    causes:
      - type: ConfigChange
    message: config change
  latestVersion: 19
  observedGeneration: 20
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
Service for sentry:
apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  labels:
    app: sentry
  name: sentry
  namespace: test
  resourceVersion: '505555608'
  selfLink: BLANK
  uid: BLANK
spec:
  clusterIP: BLANK
  ports:
    - name: 9000-tcp
      port: 9000
      protocol: TCP
      targetPort: 9000
  selector:
    deploymentconfig: sentry
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Deployment config for postgresql:
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  generation: 10
  labels:
    app: postgres
    type: backend
  name: postgres
  namespace: test
  resourceVersion: '506664185'
  selfLink: BLANK
  uid: BLANK
spec:
  replicas: 1
  selector:
    app: postgres
    deploymentconfig: postgres
    type: backend
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        openshift.io/generated-by: OpenShiftWebConsole
      creationTimestamp: null
      labels:
        app: postgres
        deploymentconfig: postgres
        type: backend
    spec:
      containers:
        - env:
            - name: PGDATA
              value: /var/lib/postgresql/data/sql
            - name: POSTGRES_HOST_AUTH_METHOD
              value: trust
            - name: POSTGRESQL_USER
              value: sentry
            - name: POSTGRESQL_PASSWORD
              value: sentry
            - name: POSTGRESQL_DATABASE
              value: sentry
          image: BLANK
          imagePullPolicy: Always
          name: postgres
          ports:
            - containerPort: 5432
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: volume-uirge
              subPath: sql
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        runAsUser: 2000020900
      terminationGracePeriodSeconds: 30
      volumes:
        - name: volume-uirge
          persistentVolumeClaim:
            claimName: postgressql
  test: false
  triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
          - postgres
        from:
          kind: ImageStreamTag
          name: 'postgres:latest'
          namespace: catcloud
        lastTriggeredImage: BLANK
      type: ImageChange
status:
  availableReplicas: 1
  conditions:
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: Deployment config has minimum availability.
      status: 'True'
      type: Available
    - lastTransitionTime: BLANK
      lastUpdateTime: BLANK
      message: replication controller "postgres-9" successfully rolled out
      reason: NewReplicationControllerAvailable
      status: 'True'
      type: Progressing
  details:
    causes:
      - type: ConfigChange
    message: config change
  latestVersion: 9
  observedGeneration: 10
  readyReplicas: 1
  replicas: 1
  unavailableReplicas: 0
  updatedReplicas: 1
Service config postgresql:
apiVersion: v1
kind: Service
metadata:
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: BLANK
  labels:
    app: postgres
    type: backend
  name: postgres
  namespace: catcloud
  resourceVersion: '506548841'
  selfLink: /api/v1/namespaces/catcloud/services/postgres
  uid: BLANK
spec:
  clusterIP: BLANK
  ports:
    - name: 5432-tcp
      port: 5432
      protocol: TCP
      targetPort: 5432
  selector:
    deploymentconfig: postgres
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
Pods (even in the same namespace) do not reach each other via localhost. You generally create a Service so that a pod can be reached by other pods under a stable name; in other words, one pod connects to another pod via the latter's Service.
The connection info would look something like <servicename>:<serviceport> (e.g. elasticsearch-master:9200) rather than localhost:port.
You can read https://kubernetes.io/docs/concepts/services-networking/service/ for further info on Services.
N.B.: localhost:port only works for containers running inside the same pod to connect to each other, just like how nginx connects to gravitee-mgmt-api and gravitee-mgmt-ui in my setup.
Well, to me it looks like you didn't configure Sentry correctly: you are not providing the credentials the Sentry pod needs to connect to the PostgreSQL and Redis pods.
env:
  - name: SENTRY_SECRET_KEY
    valueFrom:
      secretKeyRef:
        name: sentry-sentry
        key: sentry-secret
  - name: SENTRY_DB_USER
    value: "sentry"
  - name: SENTRY_DB_NAME
    value: "sentry"
  - name: SENTRY_DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sentry-postgresql
        key: postgres-password
  - name: SENTRY_POSTGRES_HOST
    value: sentry-postgresql
  - name: SENTRY_POSTGRES_PORT
    value: "5432"
  - name: SENTRY_REDIS_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sentry-redis
        key: redis-password
  - name: SENTRY_REDIS_HOST
    value: sentry-redis
  - name: SENTRY_REDIS_PORT
    value: "6379"
  - name: SENTRY_EMAIL_HOST
    value: "smtp"
  - name: SENTRY_EMAIL_PORT
    value: "25"
  - name: SENTRY_EMAIL_USER
    value: ""
  - name: SENTRY_EMAIL_PASSWORD
    valueFrom:
      secretKeyRef:
        name: sentry-sentry
        key: smtp-password
  - name: SENTRY_EMAIL_USE_TLS
    value: "false"
  - name: SENTRY_SERVER_EMAIL
    value: "sentry@sentry.local"
For more info you could refer to this example, where Sentry is configured this way:
https://github.com/maty21/sentry-kubernetes/blob/master/sentry.yaml
For communication between pods, localhost or 127.0.0.1 does not work.
You can get the IP of any pod using
kubectl describe pod <podname>
and use that IP in the other pod to reach it.
Since pod IPs change whenever a pod is recreated, you should ideally use a Kubernetes Service (specifically of type ClusterIP) for communication between pods within the cluster.
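One more detail worth checking against the YAML above (an observation, not a confirmed fix): the postgres Service is defined in the catcloud namespace, while the Sentry DeploymentConfig and Service run in test, so from the Sentry pod the database would have to be addressed by a fully qualified Service DNS name rather than the short name. A quick sanity check:
# Services resolvable by short name from the Sentry pod (its own namespace)
oc get svc -n test

# The postgres Service above lives in catcloud, so from "test" it resolves as
#   postgres.catcloud.svc.cluster.local:5432
oc get svc postgres -n catcloud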

SonarQube + Postgresql Connection refused error in Kubernetes Cluster

sonar-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
  template:
    metadata:
      labels:
        app: sonarqube
    spec:
      containers:
        - image: 10.150.0.131/devops/sonarqube:1.0
          args:
            - -Dsonar.web.context=/sonar
          name: sonarqube
          env:
            - name: SONARQUBE_JDBC_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: SONARQUBE_JDBC_URL
              value: jdbc:postgresql://sonar-postgres:5432/sonar
          ports:
            - containerPort: 9000
              name: sonarqube
sonar-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    name: sonarqube
sonar-postgres-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonar-postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonar-postgres
  template:
    metadata:
      labels:
        app: sonar-postgres
    spec:
      containers:
        - image: 10.150.0.131/devops/postgres:12.1
          name: sonar-postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-pwd
                  key: password
            - name: POSTGRES_USER
              value: sonar
          ports:
            - containerPort: 5432
              name: postgresport
          volumeMounts:
            # This name must match the volumes.name below.
            - name: data-disk
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: data-disk
          persistentVolumeClaim:
            claimName: claim-postgres
sonar-postgresql-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    name: sonar-postgres
Kubernetes version: 1.18.0
Docker version: 19.03
I am having a connection problem between the SonarQube pod and the PostgreSQL pod.
I use the Flannel network plugin.
Can you help with the error?
No log output appears for the PostgreSQL pod.
ERROR (screenshot not included)
Try with:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonar-postgres
  name: sonar-postgres
spec:
  ports:
    - port: 5432
  selector:
    app: sonar-postgres
because it looks like your selector is wrong: the pod template is labeled app: sonar-postgres, but the Service selects on name. The same issue exists in sonar-service.yaml; change the selector key from name to app there as well and it should work.
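For completeness, a sketch of sonar-service.yaml with the same fix applied (only the selector key changes, so it matches the app: sonarqube label on the Deployment's pod template):
apiVersion: v1
kind: Service
metadata:
  labels:
    name: sonarqube
  name: sonarqube
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 9000
      name: sonarport
  selector:
    app: sonarqube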
If you installed PostgreSQL on a managed cloud SQL service, you need to open firewall access for the client IP. To test this, try allowing 0.0.0.0/0, which opens access to everything; however, allowing only the correct SonarQube IP is the better solution.

How to "kubectl get ep" in deployment.yaml

I have a Kubernetes Deployment that uses environment variables, and I wonder how to set dynamic endpoints in it.
For the moment, I use
$ kubectl get ep rtspcroatia
NAME ENDPOINTS AGE
rtspcroatia 172.17.0.8:8554 3h33m
and copy/paste the endpoint's value into my deployment.yaml. To me this is not the right way to do it, but I can't find another method.
Here is a part of my deployment.yaml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    io.kompose.service: person-cam0
  name: person-cam0
spec:
  template:
    metadata:
      labels:
        io.kompose.service: person-cam0
    spec:
      containers:
        - env:
            - name: S2_LOGOS_INPUT_ADDRESS
              value: rtsp://172.17.0.8:8554/live.sdp
          image: ******************
          name: person-cam0
EDIT: And here is the Service of the RTSP container:
apiVersion: v1
kind: Service
metadata:
  labels:
    io.kompose.service: rtspcroatia
  name: rtspcroatia
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8551
      targetPort: 8554
  selector:
    io.kompose.service: rtspcroatia
Can you help me get something like this?
containers:
  - env:
      - name: S2_LOGOS_INPUT_ADDRESS
        value: rtsp://$ENDPOINT_ADDR:$ENDPOINT_PORT/live.sdp
Thank you!
You could set dynamic endpoint values like "POD_IP:SERVICE_PORT" as shown in the sample YAML below. Note that Kubernetes only expands variable references written as $(VAR_NAME) inside env values:
containers:
  - env:
      - name: MY_ENDPOINT_IP
        valueFrom:
          fieldRef:
            fieldPath: status.podIP
      - name: S2_LOGOS_INPUT_ADDRESS
        value: rtsp://$(MY_ENDPOINT_IP):$(RTSPCROATIA_SERVICE_PORT)/live.sdp
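Alternatively (a sketch using only the rtspcroatia Service already shown in the question), the Service's DNS name is stable across pod restarts, so the endpoint does not need to be copied by hand at all. Note the Service exposes port 8551 and forwards to targetPort 8554:
containers:
  - env:
      - name: S2_LOGOS_INPUT_ADDRESS
        value: rtsp://rtspcroatia:8551/live.sdp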