ClassNotFoundException: org.postgresql.Driver when connecting to the database for Guacamole - postgresql

I'm trying to deploy Guacamole on a Kubernetes cluster. First I ran into the known "Authentication type 10 is not supported" issue. According to the issue ticket on their Jira page it's already fixed in their GitHub repo, but has yet to be released.
So @Stavros Kois pointed out to me in the comments that they're using a temporary solution until the next release of Guacamole. I've copied the two scripts, 3-temp-hack and 4-temp-hack, from their file and used them in my own deployment.
The deployment file looks like this:
apiVersion: v1
kind: Service
metadata:
  name: guacamole
  namespace: $NAMESPACE
  labels:
    app: guacamole
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8080
  selector:
    app: guacamole
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: guacd
  namespace: $NAMESPACE
  labels:
    app: guacamole
spec:
  ports:
    - name: http
      port: 4822
      targetPort: 4822
  selector:
    app: guacamole
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guacamole
  namespace: $NAMESPACE
  labels:
    app: guacamole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: guacamole
  template:
    metadata:
      labels:
        app: guacamole
    spec:
      containers:
        - name: guacd
          image: docker.io/guacamole/guacd:$GUACAMOLE_GUACD_VERSION
          env:
            - name: GUACD_LOG_LEVEL
              value: "debug"
          ports:
            - containerPort: 4822
          securityContext:
            runAsUser: 1000
            runAsGroup: 1000
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
        - name: guacamole
          image: docker.io/guacamole/guacamole:$GUACAMOLE_GUACAMOLE_VERSION
          env:
            - name: GUACD_HOSTNAME
              value: "localhost"
            - name: GUACD_PORT
              value: "4822"
            - name: POSTGRES_HOSTNAME
              value: "database-url.nl"
            - name: POSTGRES_PORT
              value: "5432"
            - name: POSTGRES_DATABASE
              value: "guacamole"
            - name: POSTGRES_USER
              value: "guacamole_admin"
            - name: POSTGRES_PASSWORD
              value: "guacamoleadmin"
            - name: HOME
              value: "/home/guacamole"
            - name: LOGBACK_LEVEL
              value: "debug"
          ports:
            - name: http
              containerPort: 8080
            - name: https
              containerPort: 8443
          securityContext:
            runAsUser: 1001
            runAsGroup: 1001
            allowPrivilegeEscalation: false
            runAsNonRoot: true
          # TEMPORARY WORKAROUND UNTIL GUACAMOLE RELEASE AN IMAGE WITH THE UPDATED DRIVER
          volumeMounts:
            - name: temphackalso
              mountPath: "/opt/guacamole/postgresql"
      volumes:
        - name: temphack
          persistentVolumeClaim:
            claimName: temphack
        - name: temphackalso
          persistentVolumeClaim:
            claimName: temphackalso
      # TEMPORARY WORKAROUND UNTIL GUACAMOLE RELEASE AN IMAGE WITH THE UPDATED DRIVER
      # https://issues.apache.org/jira/browse/GUACAMOLE-1433
      # https://github.com/apache/guacamole-client/pull/655
      initContainers:
        - name: temp-hack-1
          image: docker.io/guacamole/guacamole:$GUACAMOLE_GUACAMOLE_VERSION
          securityContext:
            runAsUser: 1001
            runAsGroup: 1001
          volumeMounts:
            - name: temphack
              mountPath: "/opt/guacamole/postgresql-hack"
          command: ["/bin/sh", "-c"]
          args:
            - |-
              echo "Checking postgresql driver version..."
              if [ -e /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar ]; then
                echo "Version found is correct."
                exit 0
              else
                echo "Old version found. Will try to download a known-to-work version."
                echo "Downloading (postgresql-42.2.24.jre7.jar)..."
                curl -L "https://jdbc.postgresql.org/download/postgresql-42.2.24.jre7.jar" >"/opt/guacamole/postgresql-hack/postgresql-42.2.24.jre7.jar"
                if [ -e /opt/guacamole/postgresql-hack/postgresql-42.2.24.jre7.jar ]; then
                  echo "Downloaded successfully!"
                  cp -r /opt/guacamole/postgresql/* /opt/guacamole/postgresql-hack/
                  if [ -e /opt/guacamole/postgresql-hack/postgresql-9.4-1201.jdbc41.jar ]; then
                    echo "Removing old version... (postgresql-9.4-1201.jdbc41.jar)"
                    rm "/opt/guacamole/postgresql-hack/postgresql-9.4-1201.jdbc41.jar"
                    if [ $? -eq 0 ]; then
                      echo "Removed successfully!"
                    else
                      echo "Failed to remove."
                      exit 1
                    fi
                  fi
                else
                  echo "Failed to download."
                  exit 1
                fi
              fi
        - name: temp-hack-2
          image: docker.io/guacamole/guacamole:$GUACAMOLE_GUACAMOLE_VERSION
          securityContext:
            runAsUser: 1001
            runAsGroup: 1001
          volumeMounts:
            - name: temphack
              mountPath: "/opt/guacamole/postgresql-hack"
            - name: temphackalso
              mountPath: "/opt/guacamole/postgresql"
          command: ["/bin/sh", "-c"]
          args:
            - |-
              echo "Copying postgres driver into the final destination."
              cp -r /opt/guacamole/postgresql-hack/* /opt/guacamole/postgresql/
              if [ -e /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar ]; then
                echo "Driver copied successfully!"
              else
                echo "Failed to copy the driver"
              fi
So I've successfully copied the correct PostgreSQL driver. The following commands return the correct driver as expected. ls -lah /opt/guacamole/postgresql gives:
drwxrwxrwx 4 root root 4.0K Jul 28 15:46 .
drwxr-xr-x 1 root root 184 Dec 29 2021 ..
-rw-r--r-- 1 guacamole guacamole 5.9M Jul 28 17:06 guacamole-auth-jdbc-postgresql-1.4.0.jar
drwx------ 2 root root 16K Jul 28 15:45 lost+found
-rw-r--r-- 1 guacamole guacamole 0 Jan 1 1970 postgresql-42.2.24.jre7.jar
drwxr-xr-x 3 guacamole guacamole 4.0K Jul 28 15:46 schema
And ls -lah /home/guacamole/.guacamole/lib gives:
drwxr-xr-x 2 guacamole guacamole 41 Jul 28 17:06 .
drwxr-xr-x 4 guacamole guacamole 82 Jul 28 17:06 ..
lrwxrwxrwx 1 guacamole guacamole 53 Jul 28 17:06 postgresql-42.2.24.jre7.jar -> /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar
But I'm still getting the ClassNotFoundException: org.postgresql.Driver. Any ideas?
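For debugging this, a quick sanity check (paths assumed from the listings above) is to confirm that the jar is non-empty and actually contains org/postgresql/Driver.class, for example:
# Run inside the guacamole container; paths assumed from the listings above.
# [ -s FILE ] fails for a missing or zero-byte file, which would explain a
# ClassNotFoundException even though the jar appears in the directory listing.
if [ -s /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar ]; then
  echo "Driver jar is present and non-empty."
else
  echo "Driver jar is missing or empty - re-check the curl download in the init container."
fi
# If unzip is available in the image, confirm the class is really inside the jar:
unzip -l /opt/guacamole/postgresql/postgresql-42.2.24.jre7.jar | grep 'org/postgresql/Driver.class'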

Related

Access Strimzi Kafka externally using TLS authentication

I have set up Strimzi Kafka on Azure AKS and am trying to access it from outside using TLS authentication, but I am facing the error below. We are using our own certificates, configured under the listener section.
Kafka yaml:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
name: eesb-cluster
spec:
kafka:
version: 3.1.0
replicas: 3
logging:
type: inline
loggers:
kafka.root.logger.level: "INFO"
resources:
requests:
memory: 2Gi
cpu: "1"
limits:
memory: 4Gi
cpu: "2"
readinessProbe:
initialDelaySeconds: 15
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
livenessProbe:
initialDelaySeconds: 15
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
template:
pod:
imagePullSecrets:
- name: artifactory-kafka
# Kafka JVM
jvmOptions:
gcLoggingEnabled: true
-Xms: 1g
-Xmx: 2g
-XX:
UseG1GC: true
MaxGCPauseMillis: 20
InitiatingHeapOccupancyPercent: 35
ExplicitGCInvokesConcurrent: true
listeners:
- name: plain
port: 9092
type: internal
tls: false
- name: external
port: 9093
type: loadbalancer
tls: true
authentication:
type: tls
configuration:
bootstrap:
loadBalancerIP: 20.22.199.99
annotations:
external-dns.alpha.kubernetes.io/hostname: bootstrap.example.com
alternativeNames:
- bootstrap.example.com
brokers:
- broker: 0
loadBalancerIP: 20.22.199.100
annotations:
external-dns.alpha.kubernetes.io/hostname: kafka-0.example.com
advertisedHost: kafka-0.example.com
- broker: 1
loadBalancerIP: 20.22.199.101
annotations:
external-dns.alpha.kubernetes.io/hostname: kafka-1.example.com
advertisedHost: kafka-1.example.com
- broker: 2
loadBalancerIP: 20.22.199.102
annotations:
external-dns.alpha.kubernetes.io/hostname: kafka-2.example.com
advertisedHost: kafka-2.example.com
brokerCertChainAndKey:
secretName: source-kafka-listener-cert
certificate: tls.crt
key: tls.key
config:
num.partitions: 1
offsets.topic.replication.factor: 3
log.retention.hours: 24
log.segment.bytes: 1073741824
log.retention.check.interval.ms: 300000
num.network.threads: 3
num.io.threads: 8
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
num.recovery.threads.per.data.dir: 1
inter.broker.protocol.version: "3.1"
auto.create.topics.enable: false
delete.topic.enable: true
# Rack awareness
# replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector
# broker.rack:
#rack:
# topologyKey: topology.kubernetes.io/region=
# topologyKey: topology.kubernetes.io/zone=
storage:
type: jbod
volumes:
- deleteClaim: false
id: 0
size: 80Gi
type: persistent-claim
# Kafka Prometheus JMX Exporter
#metricsConfig:
#type: jmxPrometheusExporter
#valueFrom:
#configMapKeyRef:
#key: kafka-metrics-config.yml
#name: kafka-zk-metrics
#metrics:
#lowercaseOutputName: true
#kafkaExporter:
#groupRegex: .*
#topicRegex: .*
# Kafka JMX metrics
#jmxOptions:
#authentication:
#type: "password"
zookeeper:
replicas: 3
storage:
type: persistent-claim
size: 10Gi
deleteClaim: false
# Zookeeper loggers and log level
#logging:
#type: external
#name: kafka-logging-configmap
logging:
type: inline
loggers:
zookeeper.root.logger: "INFO"
resources:
requests:
memory: 1Gi
cpu: "1"
limits:
memory: 2Gi
cpu: "2"
# Zookeeper JVM
jvmOptions:
gcLoggingEnabled: true
-Xms: 1g
-Xmx: 1g
-XX:
UseG1GC: true
MaxGCPauseMillis: 20
InitiatingHeapOccupancyPercent: 35
ExplicitGCInvokesConcurrent: true
I have set up the same Kafka certificate in the authentication section of Kafka Connect and added all the CA certificates to the truststore on the client side.
Kafka Connect client yaml:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
name: eesb-connect-cluster
annotations:
strimzi.io/use-connector-resources: "true"
spec:
version: 3.1.0
replicas: 2
image: packages.alm.eurofins.com/docker-eurofins-eesb-prerelease/com/eurofins/eesb/eesb-kafka-connect:0.0.1-SNAPSHOT
bootstrapServers: itaag108-uat2.eesb.eurofins.com:9093
authentication:
type: tls
certificateAndKey:
certificate: tls.crt
key: tls.key
secretName: source-kafka-listener-cert
tls:
trustedCertificates:
- certificate: ca1.crt
secretName: source-kafka-trust-cert
- certificate: ca2.crt
secretName: source-kafka-trust-cert
- certificate: ca3.crt
secretName: source-kafka-trust-cert
config:
key.converter: org.apache.kafka.connect.json.JsonConverter
value.converter: org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable: true
value.converter.schemas.enable: true
offset.flush.timeout.ms: 120000
offset.flush.interval.ms: 10000
group.id: connect-cluster
offset.storage.topic: connect-cluster-offsets
config.storage.topic: connect-cluster-configs
status.storage.topic: connect-cluster-status
# -1 means it will use the default replication factor configured in the broker
config.storage.replication.factor: 2
offset.storage.replication.factor: 2
status.storage.replication.factor: 2
config.providers: file
config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
resources:
requests:
cpu: "1"
memory: 512m
limits:
cpu: "2"
memory: 2Gi
logging:
type: inline
loggers:
log4j.rootLogger: "INFO"
readinessProbe:
initialDelaySeconds: 60
periodSeconds: 20
timeoutSeconds: 30
livenessProbe:
initialDelaySeconds: 60
periodSeconds: 20
timeoutSeconds: 30
template:
pod:
imagePullSecrets:
- name: artifactory-kafka
externalConfiguration:
volumes:
- name: connector-config
secret:
secretName: target-uat-db-auth
jvmOptions:
gcLoggingEnabled: false
-Xmx: 1024m
-Xms: 512m
-XX:
UseG1GC: true
MaxGCPauseMillis: 20
InitiatingHeapOccupancyPercent: 35
ExplicitGCInvokesConcurrent: true
UseStringDeduplication: true
I have not made any changes to the cluster-ca-cert or clients-ca-cert automatically generated by Strimzi.
Error:
2022-09-13 09:39:34,227 WARN [AdminClient clientId=adminclient-1] Metadata update failed due to authentication error
(org.apache.kafka.clients.admin.internals.AdminMetadataManager) [kafka-admin-client-thread | adminclient-1]
org.apache.kafka.common.errors.SslAuthenticationException: Failed to process post-handshake messages
Caused by: javax.net.ssl.SSLException: Tag mismatch!
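One way to narrow down where the handshake breaks is to test the external bootstrap listener directly with openssl s_client; the hostname and port are assumed from the listener configuration above, and the certificate files are assumed to have been exported locally from the secrets referenced by KafkaConnect:
# Inspect the certificate chain presented by the external bootstrap listener
openssl s_client -connect bootstrap.example.com:9093 -servername bootstrap.example.com </dev/null
# Repeat while presenting the client certificate and key used by KafkaConnect
# (assumes tls.crt, tls.key and ca1.crt from the secrets above were exported to local files)
openssl s_client -connect bootstrap.example.com:9093 -servername bootstrap.example.com \
  -CAfile ca1.crt -cert tls.crt -key tls.key </dev/null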
Please help with this issue.
Thanks,
Shiva

Mounting a ConfigMap as a volume in Kubernetes: how do I calculate the value of defaultMode?

Defining the defaultMode in a Kubernetes volume field within a Deployment can become quite tricky.
It expects a decimal integer whose value corresponds to the UNIX permission bits, which are conventionally written in octal.
As an example, to mount the ConfigMap with permissions r-------- (octal 400), you'd need to specify 256.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
  namespace: foo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
        - image: php-fpm:latest
          volumeMounts:
            - name: phpini
              mountPath: /usr/local/etc/php/conf.d/99-settings.ini
              readOnly: true
              subPath: 99-settings.ini
      volumes:
        - configMap:
            defaultMode: 256
            name: phpini-configmap
            optional: false
          name: phpini
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: foo
  namespace: foo
  name: phpini-configmap
data:
  99-settings.ini: |
    ; Enable Zend OPcache extension module
    zend_extension = opcache
Use the following table:
unix octal   unix readable   binary equivalent   defaultMode
400          r--------       100000000           256
440          r--r-----       100100000           288
444          r--r--r--       100100100           292
600          rw-------       110000000           384
640          rw-r-----       110100000           416
660          rw-rw----       110110000           432
664          rw-rw-r--       110110100           436
666          rw-rw-rw-       110110110           438
700          rwx------       111000000           448
770          rwxrwx---       111111000           504
777          rwxrwxrwx       111111111           511
A more direct way to do this is to use a base-8 (octal) to base-10 (decimal) converter.
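If a shell is handy, printf can do the same conversion, since it evaluates a number with a leading 0 as an octal constant:
# Convert octal UNIX permissions to the decimal value expected by defaultMode
printf '%d\n' 0400   # prints 256
printf '%d\n' 0440   # prints 288
printf '%d\n' 0777   # prints 511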

About Kubernetes ConfigMap mountPath with subPath

The pod yaml
containers:
  - name: kiada
    image: :kiada-0.1
    volumeMounts:
      - name: my-test
        subPath: my-app.conf
        mountPath: /html/my-app.conf
volumes:
  - name: my-test
    configMap:
      name: kiada-config
the config map
➜ v5-kubernetes git:(master) ✗ k get cm kiada-config -oyaml
apiVersion: v1
data:
  key: value\n
  status-message: This status message is set in the kiada-config config map2\n
kind: ConfigMap
metadata:
  creationTimestamp: "2022-05-18T03:01:15Z"
  name: kiada-config
  namespace: default
  resourceVersion: "135185128"
  uid: 8c8875ce-47f5-49d4-8bc7-d8dbc2d7f7ba
the pod has my-app.conf
root@kiada2-7cc7bf55d8-m97tt:/# ls -al /html/my-app.conf/
total 12
drwxrwxrwx 3 root root 4096 May 21 02:29 .
drwxr-xr-x 1 root root 4096 May 21 02:29 ..
drwxr-xr-x 2 root root 4096 May 21 02:29 ..2022_05_21_02_29_41.554311630
lrwxrwxrwx 1 root root 31 May 21 02:29 ..data -> ..2022_05_21_02_29_41.554311630
lrwxrwxrwx 1 root root 10 May 21 02:29 key -> ..data/key
lrwxrwxrwx 1 root root 21 May 21 02:29 status-message -> ..data/status-message
root@kiada2-7cc7bf55d8-m97tt:/# ls -al /html/my-app.conf/
If I add subPath in the pod yaml:
spec:
  containers:
    - name: kiada
      image: kiada-0.1
      volumeMounts:
        - name: my-test
          subPath: my-app.conf
          mountPath: /html/my-app.conf
  volumes:
    - name: my-test
      configMap:
        name: kiada-config
The result:
root@kiada2-c89749c8-x9qwq:/# ls -al html/my-app.conf/
total 8
drwxrwxrwx 2 root root 4096 May 21 02:36 .
drwxr-xr-x 1 root root 4096 May 21 02:36 ..
Why, when I use subPath, do the config map keys not exist? What's wrong?
In order to produce a file called my-app.conf containing your application config in your Pod's file system, you would have to ensure that this file exists in your config map:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kiada-config
data:
  my-app.conf: |
    key: value
    status-message: This status message is set in the kiada-config config map2
Then, you can mount it into your Pod like this:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kiada
  name: kiada
spec:
  containers:
    - name: kiada
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "tail -f /dev/null" ]
      volumeMounts:
        - mountPath: /html/
          name: my-test
  volumes:
    - name: my-test
      configMap:
        name: kiada-config
The subPath field is not required in this scenario. It would be useful either if you wanted to remap my-app.conf to a different name...
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kiada
  name: kiada
spec:
  containers:
    - name: kiada
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "tail -f /dev/null" ]
      volumeMounts:
        - mountPath: /html/my-app-new-name.conf
          name: my-test
          subPath: my-app.conf
  volumes:
    - name: my-test
      configMap:
        name: kiada-config
...or if you had multiple config files in your ConfigMap and just wanted to map one of them into your Pod:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kiada-config
data:
  my-app.conf: |
    key: value
    status-message: This status message is set in the kiada-config config map2
  my-second-app.conf: |
    error: not in use
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: kiada
  name: kiada
spec:
  containers:
    - name: kiada
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "tail -f /dev/null" ]
      volumeMounts:
        - mountPath: /html/my-app.conf
          name: my-test
          subPath: my-app.conf
  volumes:
    - name: my-test
      configMap:
        name: kiada-config
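To confirm the result, the rendered file can be checked from inside the running pod (pod name and mount path taken from the examples above):
# List the mount and print the rendered config file inside the pod
kubectl exec kiada -- ls -al /html/
kubectl exec kiada -- cat /html/my-app.conf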
There is no file in your ConfigMap. I would suggest checking out: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume
ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        name: special-config
  restartPolicy: Never
When the pod runs, the command ls /etc/config/ produces the output below:
SPECIAL_LEVEL
SPECIAL_TYPE
If you want to inject the ConfigMap under a different file name, you can use items:
items:
  - key: SPECIAL_LEVEL
    path: keys
Example: https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-specific-path-in-the-volume
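A quick way to see which keys a ConfigMap will actually expose as files in the volume is to inspect its data section, for example:
# The data keys become the file names inside the mounted volume
kubectl get configmap kiada-config -o jsonpath='{.data}'
kubectl get configmap kiada-config -o yaml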

How to configure imagePullSecrets in a Kubernetes pod with init containers

I am pulling an image from a private registry in a Kubernetes (v1.15.2) pod, and I am using this config to authenticate my access in the containers:
"imagePullSecrets": [
{
"name": "regcred"
}
]
but it does not seem to work with the init containers. When pulling the init container image, the pod throws this error:
Failed to pull image "registry.cn-hangzhou.aliyuncs.com/app_k8s/fat/alpine-bash:3.8": rpc error: code = Unknown desc = Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
So how do I specify the auth info for the init containers of a Kubernetes pod to make it work? This is my full config:
kind: Deployment
apiVersion: apps/v1
metadata:
name: deployment-apollo-portal-server
namespace: sre
selfLink: /apis/apps/v1/namespaces/sre/deployments/deployment-apollo-portal-server
uid: bc8e94bb-524d-487e-b9bb-90624cfcace3
resourceVersion: '479747'
generation: 4
creationTimestamp: '2020-05-31T05:34:08Z'
labels:
app: deployment-apollo-portal-server
annotations:
deployment.kubernetes.io/revision: '4'
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app":"deployment-apollo-portal-server"},"name":"deployment-apollo-portal-server","namespace":"sre"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"pod-apollo-portal-server"}},"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":1},"type":"RollingUpdate"},"template":{"metadata":{"labels":{"app":"pod-apollo-portal-server"}},"spec":{"affinity":{"podAntiAffinity":{"preferredDuringSchedulingIgnoredDuringExecution":[{"podAffinityTerm":{"labelSelector":{"matchExpressions":[{"key":"app","operator":"In","values":["pod-apollo-portal-server"]}]},"topologyKey":"kubernetes.io/hostname"},"weight":100}]}},"containers":[{"env":[{"name":"APOLLO_PORTAL_SERVICE_NAME","value":"service-apollo-portal-server.sre"}],"image":"apollo-portal-server:v1.0.0","imagePullPolicy":"IfNotPresent","livenessProbe":{"initialDelaySeconds":120,"periodSeconds":15,"tcpSocket":{"port":8070}},"name":"container-apollo-portal-server","ports":[{"containerPort":8070,"protocol":"TCP"}],"readinessProbe":{"initialDelaySeconds":10,"periodSeconds":5,"tcpSocket":{"port":8070}},"securityContext":{"privileged":true},"volumeMounts":[{"mountPath":"/apollo-portal-server/config/application-github.properties","name":"volume-configmap-apollo-portal-server","subPath":"application-github.properties"},{"mountPath":"/apollo-portal-server/config/apollo-env.properties","name":"volume-configmap-apollo-portal-server","subPath":"apollo-env.properties"}]}],"dnsPolicy":"ClusterFirst","initContainers":[{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-dev.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-dev"},{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-alpha.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-alpha"},{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-beta.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-beta"},{"command":["bash","-c","curl
--connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-prod.sre:8090"],"image":"alpine-bash:3.8","name":"check-service-apollo-admin-server-prod"}],"restartPolicy":"Always","volumes":[{"configMap":{"items":[{"key":"application-github.properties","path":"application-github.properties"},{"key":"apollo-env.properties","path":"apollo-env.properties"}],"name":"configmap-apollo-portal-server"},"name":"volume-configmap-apollo-portal-server"}]}}}}
spec:
replicas: 3
selector:
matchLabels:
app: pod-apollo-portal-server
template:
metadata:
creationTimestamp: null
labels:
app: pod-apollo-portal-server
spec:
volumes:
- name: volume-configmap-apollo-portal-server
configMap:
name: configmap-apollo-portal-server
items:
- key: application-github.properties
path: application-github.properties
- key: apollo-env.properties
path: apollo-env.properties
defaultMode: 420
initContainers:
- name: check-service-apollo-admin-server-dev
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120 service-apollo-admin-server-dev.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: check-service-apollo-admin-server-alpha
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-alpha.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: check-service-apollo-admin-server-beta
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120
service-apollo-admin-server-test-beta.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: check-service-apollo-admin-server-prod
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/alpine-bash:3.8
command:
- bash
- '-c'
- >-
curl --connect-timeout 2 --max-time 5 --retry 60 --retry-delay 1
--retry-max-time 120 service-apollo-admin-server-prod.sre:8090
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
containers:
- name: container-apollo-portal-server
image: >-
registry.cn-hangzhou.aliyuncs.com/dabai_app_k8s/dabai_fat/apollo-portal-server:v1.0.0
ports:
- containerPort: 8070
protocol: TCP
env:
- name: APOLLO_PORTAL_SERVICE_NAME
value: service-apollo-portal-server.sre
resources: {}
volumeMounts:
- name: volume-configmap-apollo-portal-server
mountPath: /apollo-portal-server/config/application-github.properties
subPath: application-github.properties
- name: volume-configmap-apollo-portal-server
mountPath: /apollo-portal-server/config/apollo-env.properties
subPath: apollo-env.properties
livenessProbe:
tcpSocket:
port: 8070
initialDelaySeconds: 120
timeoutSeconds: 1
periodSeconds: 15
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 8070
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 5
successThreshold: 1
failureThreshold: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
securityContext: {}
imagePullSecrets:
- name: regcred
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- pod-apollo-portal-server
topologyKey: kubernetes.io/hostname
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
maxSurge: 1
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 4
replicas: 4
updatedReplicas: 2
unavailableReplicas: 4
conditions:
- type: Available
status: 'False'
lastUpdateTime: '2020-05-31T05:34:08Z'
lastTransitionTime: '2020-05-31T05:34:08Z'
reason: MinimumReplicasUnavailable
message: Deployment does not have minimum availability.
- type: Progressing
status: 'False'
lastUpdateTime: '2020-05-31T06:08:24Z'
lastTransitionTime: '2020-05-31T06:08:24Z'
reason: ProgressDeadlineExceeded
message: >-
ReplicaSet "deployment-apollo-portal-server-dc65dcf6b" has timed out
progressing.
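For reference, a regcred-style pull secret is normally created along these lines; the credentials below are placeholders, and only the registry, namespace, secret name and deployment name are taken from the config above:
# Create the docker-registry secret referenced by imagePullSecrets (placeholder credentials)
kubectl create secret docker-registry regcred \
  --namespace sre \
  --docker-server=registry.cn-hangzhou.aliyuncs.com \
  --docker-username=<your-username> \
  --docker-password=<your-password>
# Check which pull secrets the pod template actually references
kubectl -n sre get deployment deployment-apollo-portal-server \
  -o jsonpath='{.spec.template.spec.imagePullSecrets}'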

Openshift Origin 1.5.1 Pod anti-affinity on DeploymentConfig not working

I have a strange issue where I am trying to apply a podAntiAffinity rule to make sure that no two pods of a specific DeploymentConfig ever end up on the same node.
I attempt to edit the dc with:
spec:
  replicas: 1
  selector:
    app: server-config
    deploymentconfig: server-config
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: server-config
        deploymentconfig: server-config
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - server-config
              topologyKey: "kubernetes.io/hostname"
but on saving that, I get:
"/tmp/oc-edit-34z56.yaml" 106L, 3001C written
deploymentconfig "server-config" skipped
and the changes don't stick. My OpenShift/Kubernetes versions are:
[root@master1 ~]# oc version
oc v1.5.1
kubernetes v1.5.2+43a9be4
features: Basic-Auth GSSAPI Kerberos SPNEGO
Thanks in advance.
This seems to work; the syntax is wildly different, and the "scheduler.alpha.kubernetes.io/affinity" annotation needs to be added for it to work:
spec:
  replicas: 1
  selector:
    app: server-config
    deploymentconfig: server-config
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/affinity: |
          {
            "podAntiAffinity": {
              "requiredDuringSchedulingIgnoredDuringExecution": [{
                "labelSelector": {
                  "matchExpressions": [{
                    "key": "app",
                    "operator": "In",
                    "values": ["server-config"]
                  }]
                },
                "topologyKey": "kubernetes.io/hostname"
              }]
            }
          }
Working as intended and spreading out properly between nodes:
[root@master1 ~]# oc get pods -o wide | grep server-config
server-config-22-4ktvf 1/1 Running 0 3h 10.1.1.73 10.0.4.101
server-config-22-fz31j 1/1 Running 0 3h 10.1.0.3 10.0.4.100
server-config-22-mrw09 1/1 Running 0 3h 10.1.2.64 10.0.4.102
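To double-check that the annotation was actually saved on the DeploymentConfig after editing, something like this works:
# Show the affinity annotation on the pod template of the DeploymentConfig
oc get dc server-config -o yaml | grep -A 15 'scheduler.alpha.kubernetes.io/affinity'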