Mount dynamically created PV to multiple containers on the same pod - kubernetes

I am working on a use case where I need to add a new container to the JupyterHub pod. This new container (a sidecar) monitors JupyterHub directories.
While coming up, the JupyterHub container creates a dynamic PV; see the section below:
singleuser:
  podNameTemplate:
  extraTolerations: []
  nodeSelector: {}
  extraNodeAffinity:
    required: []
    preferred: []
  extraPodAffinity:
    required: []
    preferred: []
  extraPodAntiAffinity:
    required: []
    preferred: []
  networkTools:
    image:
      name: jupyterhub/k8s-network-tools
      tag: "set-by-chartpress"
      pullPolicy:
      pullSecrets: []
    resources: {}
  cloudMetadata:
    # block set to true will append a privileged initContainer using the
    # iptables to block the sensitive metadata server at the provided ip.
    blockWithIptables: true
    ip: 169.254.169.254
  networkPolicy:
    enabled: true
    ingress: []
    egress:
      # Required egress to communicate with the hub and DNS servers will be
      # augmented to these egress rules.
      #
      # This default rule explicitly allows all outbound traffic from singleuser
      # pods, except to a typical IP used to return metadata that can be used by
      # someone with malicious intent.
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 169.254.169.254/32
    interNamespaceAccessLabels: ignore
    allowedIngressPorts: []
  events: true
  extraAnnotations: {}
  extraLabels:
    hub.jupyter.org/network-access-hub: "true"
  extraFiles: {}
  extraEnv: {}
  lifecycleHooks: {}
  initContainers: []
  extraContainers: []
  uid: 1000
  fsGid: 100
  serviceAccountName:
  storage:
    type: dynamic
    extraLabels: {}
    extraVolumes: []
    extraVolumeMounts: []
    static:
      pvcName:
      subPath: "{username}"
    capacity: 10Gi
    homeMountPath: /home/jovyan
    dynamic:
      storageClass:
      pvcNameTemplate: claim-{username}{servername}
      volumeNameTemplate: volume-{username}{servername}
      storageAccessModes: [ReadWriteOnce]
  image:
    name: jupyterhub/k8s-singleuser-sample
    tag: "set-by-chartpress"
    pullPolicy:
    pullSecrets: []
  startTimeout: 300
  cpu:
    limit:
    guarantee:
  memory:
    limit:
    guarantee: 1G
  extraResource:
    limits: {}
    guarantees: {}
  cmd:
  defaultUrl:
  extraPodConfig: {}
  profileList: []
I have added my new container in the extraContainers section of the deployment file. My container does start, but the dynamic PV is not mounted in that container.
Is the use case I am trying to achieve technically possible at the Kubernetes level?
The full YAML file is here for reference:
https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/main/jupyterhub/values.yaml
ConfigMap for reference:
singleuser:
baseUrl: /
cloudMetadata:
enabled: true
ip: xx.xx.xx.xx
cpu: {}
events: true
extraAnnotations: {}
extraConfigFiles:
config_files:
- cm_key: ''
content: ''
file_path: ''
- cm_key: ''
content: ''
file_path: ''
enabled: false
extraContainers:
- image: 'docker:19.03-rc-dind'
lifecycle:
postStart:
exec:
command:
- sh
- '-c'
- update-ca-certificates; echo Certificates Updated
name: dind
securityContext:
privileged: true
volumeMounts:
- mountPath: /var/lib/docker
name: dind-storage
- mountPath: /usr/local/share/ca-certificates/
name: docker-cert
extraEnv:
ACTUAL_HADOOP_CONF_DIR: ''
ACTUAL_HIVE_CONF_DIR: ''
ACTUAL_SPARK_CONF_DIR: ''
CDH_PARCEL_DIR: ''
DOCKER_HOST: ''
JAVA_HOME:
LIVY_URL: ''
SPARK2_PARCEL_DIR: ''
extraLabels:
hub.jupyter.org/network-access-hub: 'true'
extraNodeAffinity:
preferred: []
required: []
extraPodAffinity:
preferred: []
required: []
extraPodAntiAffinity:
preferred: []
required: []
extraPodConfig: {}
extraResource:
guarantees: {}
limits: {}
extraTolerations: []
fsGid: 0
image:
name: >-
/jupyterhub/jpt-spark-magic
pullPolicy: IfNotPresent
tag: xxx
imagePullSecret:
email: null
enabled: false
registry: null
username: null
initContainers: []
lifecycleHooks: {}
memory:
guarantee: 1G
networkPolicy:
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 169.254.169.254/32
enabled: false
ingress: []
networkTools:
image:
name: >-
/k8s-hub-multispawn
pullPolicy: IfNotPresent
tag: '12345'
nodeSelector: {}
profileList:
- description: Python for data enthusiasts
display_name: 0
kubespawner_override:
cmd:
- jpt-entry-cmd.sh
cpu_limit: 1
environment:
XYZ_SERVICE_URL: 'http://XYZ:8080'
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
DOCKER_HOST: 'tcp://localhost:2375'
HADOOP_CONF_DIR: /etc/hadoop/conf
HADOOP_HOME: /usr/hdp/3.1.5.6091-7/hadoop/
HDP_DIR: /usr/hdp/3.1.5.6091-7
HDP_HOME_DIR: /usr/hdp/3.1.5.6091-7
HDP_VERSION: 3.1.5.6091-7
HIVE_CONF_DIR: /usr/hdp/3.1.5.6091-7/hive
HIVE_HOME: /usr/hdp/3.1.5.6091-7/hive
INTEGRATION_ENV: HDP3
JAVA_HOME: /usr/jdk64/jdk1.8.0_112
LD_LIBRARY_PATH: >-
/usr/hdp/3.1.5.6091-7/hadoop/lib/native:/usr/jdk64/jdk1.8.0_112/jre:/usr/hdp/3.1.5.6091-7/usr/lib/:/usr/hdp/3.1.5.6091-7/usr/lib/
LIVY_URL: 'http://ammaster01.fake.org:8999'
MLFLOW_TRACKING_URI: 'http://mlflow:5100'
NO_PROXY: mlflow
SPARK_CONF_DIR: /etc/spark2/conf
SPARK_HOME: /usr/hdp/3.1.5.6091-7/spark2
SPARK2_PARCEL_DIR: /usr/hdp/3.1.5.6091-7/spark2
TOOLS_BASE_PATH: /usr/local/bin
image: >-
/jupyterhub/jpt-spark-magic:1.1.2
mem_limit: 4096M
uid: 0
- description: R for data enthusiasts
display_name: 1
kubespawner_override:
cmd:
- start-all.sh
environment:
XYZ_SERVICE_URL: 'http://XYZ-service:8080'
DISABLE_AUTH: 'true'
XYZ: /home/rstudio/kitematic
image: '/jupyterhub/rstudio:364094'
uid: 0
- description: Python for data enthusiasts test2
display_name: 2
kubespawner_override:
cmd:
- jpt-entry-cmd.sh
cpu_limit: 4
environment:
XYZ_SERVICE_URL: 'http://XYZ-service:8080'
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
DOCKER_HOST: 'tcp://localhost:2375'
HADOOP_CONF_DIR: /etc/hadoop/conf
HADOOP_HOME: /usr/hdp/3.1.5.6091-7/hadoop/
HDP_DIR: /usr/hdp/3.1.5.6091-7
HDP_HOME_DIR: /usr/hdp/3.1.5.6091-7
HDP_VERSION: 3.1.5.6091-7
HIVE_CONF_DIR: /usr/hdp/3.1.5.6091-7/hive
HIVE_HOME: /usr/hdp/3.1.5.6091-7/hive
INTEGRATION_ENV: HDP3
JAVA_HOME: /usr/jdk64/jdk1.8.0_112
LD_LIBRARY_PATH: >-
/usr/hdp/3.1.5.6091-7/hadoop/lib/native:/usr/jdk64/jdk1.8.0_112/jre:/usr/hdp/3.1.5.6091-7/usr/lib/:/usr/hdp/3.1.5.6091-7/usr/lib/
LIVY_URL: 'http://xyz:8999'
MLFLOW_TRACKING_URI: 'http://mlflow:5100'
NO_PROXY: mlflow
SPARK_CONF_DIR: /etc/spark2/conf
SPARK_HOME: /usr/hdp/3.1.5.6091-7/spark2
SPARK2_PARCEL_DIR: /usr/hdp/3.1.5.6091-7/spark2
TOOLS_BASE_PATH: /usr/local/bin
image: >-
/jupyterhub/jpt-spark-magic:1.1.2
mem_limit: 8192M
uid: 0
- description: Python for data enthusiasts test3
display_name: 3
kubespawner_override:
cmd:
- jpt-entry-cmd.sh
cpu_limit: 8
environment:
XYZ_SERVICE_URL: 'http://XYZ-service:8080'
CURL_CA_BUNDLE: /etc/ssl/certs/ca-certificates.crt
DOCKER_HOST: 'tcp://localhost:2375'
HADOOP_CONF_DIR: /etc/hadoop/conf
HADOOP_HOME: /usr/hdp/3.1.5.6091-7/hadoop/
HDP_DIR: /usr/hdp/3.1.5.6091-7
HDP_HOME_DIR: /usr/hdp/3.1.5.6091-7
HDP_VERSION: 3.1.5.6091-7
HIVE_CONF_DIR: /usr/hdp/3.1.5.6091-7/hive
HIVE_HOME: /usr/hdp/3.1.5.6091-7/hive
INTEGRATION_ENV: HDP3
JAVA_HOME: /usr/jdk64/jdk1.8.0_112
LD_LIBRARY_PATH: >-
/usr/hdp/3.1.5.6091-7/hadoop/lib/native:/usr/jdk64/jdk1.8.0_112/jre:/usr/hdp/3.1.5.6091-7/usr/lib/:/usr/hdp/3.1.5.6091-7/usr/lib/
LIVY_URL: 'http://fake.org:8999'
MLFLOW_TRACKING_URI: 'http://mlflow:5100'
NO_PROXY: mlflow
SPARK_CONF_DIR: /etc/spark2/conf
SPARK_HOME: /usr/hdp/3.1.5.6091-7/spark2
SPARK2_PARCEL_DIR: /usr/hdp/3.1.5.6091-7/spark2
TOOLS_BASE_PATH: /usr/local/bin
image: >-
/jupyterhub/jpt-spark-magic:1.1.2
mem_limit: 16384M
uid: 0
startTimeout: 300
storage:
capacity: 10Gi
dynamic:
pvcNameTemplate: 'claim-{username}{servername}'
storageAccessModes:
- ReadWriteOnce
storageClass: nfs-client
volumeNameTemplate: 'volume-{username}{servername}'
extraLabels: {}
extraVolumeMounts:
- mountPath: /etc/krb5.conf
name: krb
readOnly: true
- mountPath: /usr/jdk64/jdk1.8.0_112
name: java-home
readOnly: true
- mountPath: /xyz/conda/envs
name: xyz-conda-envs
readOnly: false
- mountPath: /usr/hdp/
name: bigdata
readOnly: true
subPath: usr-hdp
- mountPath: /etc/hadoop/
name: bigdata
readOnly: true
subPath: HDP
- mountPath: /etc/hive/
name: bigdata
readOnly: true
subPath: hdp-hive
- mountPath: /etc/spark2/
name: bigdata
readOnly: true
subPath: hdp-spark2
extraVolumes:
- emptyDir: {}
name: dind-storage
- name: docker-cert
secret:
secretName: docker-cert
- hostPath:
path: /var/lib/ut_xyz_ts/jdk1.8.0_112
type: Directory
name: java-home
- hostPath:
path: /xyz/conda/envs
type: Directory
name: xyz-conda-envs
- hostPath:
path: /etc/krb5.conf
type: File
name: krb
- name: bigdata
persistentVolumeClaim:
claimName: bigdata
homeMountPath: '/home/{username}'
static:
subPath: '{username}'
type: dynamic
uid: 0
Thanks in advance.

To your question of whether two or more containers of the same pod can technically share the same volume, the answer is yes. Refer here: https://youtu.be/GQJP9QdHHs8?t=82 .
But you should also have a volumeMount (refer to the example in the video as well) defined in your extra container's spec, along the lines of the sketch below. If you can check that, or share the output of kubectl describe deployment <your-deployment>, I can confirm it.
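Something like the sketch below could work. The sidecar name, image, and command are placeholders, and it assumes KubeSpawner expands the {username}{servername} templates in extraContainers the same way it does for the storage volume; if your chart version does not, take the generated volume name from kubectl describe pod of a running user pod and hard-code it:
singleuser:
  storage:
    type: dynamic
    homeMountPath: /home/jovyan
    dynamic:
      pvcNameTemplate: claim-{username}{servername}
      volumeNameTemplate: volume-{username}{servername}
  extraContainers:
    # Hypothetical sidecar that watches the user's home directory.
    - name: side-container
      image: busybox:1.35
      command: ["sh", "-c", "while true; do ls -l /home/jovyan; sleep 60; done"]
      volumeMounts:
        # Must reference the same volume name KubeSpawner generates for the
        # dynamically provisioned PVC (volumeNameTemplate above); otherwise
        # the volume is never mounted into the sidecar.
        - name: volume-{username}{servername}
          mountPath: /home/jovyan
Both containers then see the same home directory; a ReadWriteOnce PV can be shared by containers of the same pod because they run on the same node.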

Related

How to connect opensearch dashboards to SSO AzureAD

I'm trying to set up SSO in opensearch-dashboards via OpenID to Azure AD.
Overall: there is no need for encrypted communication between OpenSearch and the nodes, and no need for encrypted communication between Dashboards and the master pod. All I need is working SSO to Azure AD so I can see the dashboards.
I got errors in the dashboards pod like: "res":{"statusCode":302,"responseTime":746,"contentLength":9} and tags":["error","plugins","securityDashboards"],"pid":1,"message":"OpenId authentication failed: Error: [index_not_found_exception] no such index [_plugins], with { index=\"_plugins\" │ │ & resource.id=\"_plugins\" & resource.type=\"index_expression\" & index_uuid=\"_na_\" }"} and the browser tells me "The page isn’t redirecting properly".
With the last try I got this error from the ingress pod: Service "default/opensearch-values-opensearch-dashboards" does not have any active Endpoint.
I would really appreciate any advice on what I am missing.
I use a Helm installation of OpenSearch on AWS EKS (with an nginx-controller ingress to publish the address).
In AD I have an app registered with the redirect URI https://<some_address>/auth/openid/login
Here are my actual helm values:
opensearch.yaml
---
clusterName: "opensearch-cluster"
nodeGroup: "master"
masterService: "opensearch-cluster-master"
roles:
- master
- ingest
- data
- remote_cluster_client
replicas: 3
minimumMasterNodes: 1
majorVersion: ""
global:
dockerRegistry: "<registry>"
opensearchHome: /usr/share/opensearch
config:
log4j2.properties: |
rootLogger.level = debug
opensearch.yml: |
cluster.name: opensearch-cluster
network.host: 0.0.0.0
plugins.security.disabled: true
plugins:
security:
ssl:
transport:
pemcert_filepath: esnode.pem
pemkey_filepath: esnode-key.pem
pemtrustedcas_filepath: root-ca.pem
enforce_hostname_verification: false
http:
enabled: false
pemcert_filepath: esnode.pem
pemkey_filepath: esnode-key.pem
pemtrustedcas_filepath: root-ca.pem
allow_unsafe_democertificates: true
allow_default_init_securityindex: true
authcz:
admin_dn:
- CN=kirk,OU=client,O=client,L=test,C=de
audit.type: internal_opensearch
enable_snapshot_restore_privilege: true
check_snapshot_restore_write_privileges: true
restapi:
roles_enabled: ["all_access", "security_rest_api_access"]
system_indices:
enabled: true
indices:
[
".opendistro-alerting-config",
".opendistro-alerting-alert*",
".opendistro-anomaly-results*",
".opendistro-anomaly-detector*",
".opendistro-anomaly-checkpoints",
".opendistro-anomaly-detection-state",
".opendistro-reports-*",
".opendistro-notifications-*",
".opendistro-notebooks",
".opendistro-asynchronous-search-response*",
]
extraEnvs: []
envFrom: []
secretMounts: []
hostAliases: []
image:
repository: "opensearchproject/opensearch"
tag: ""
pullPolicy: "IfNotPresent"
podAnnotations: {}
labels: {}
opensearchJavaOpts: "-Xmx512M -Xms512M"
resources:
requests:
cpu: "1000m"
memory: "100Mi"
initResources: {}
sidecarResources: {}
networkHost: "0.0.0.0"
rbac:
create: false
serviceAccountAnnotations: {}
serviceAccountName: ""
podSecurityPolicy:
create: false
name: ""
spec:
privileged: true
fsGroup:
rule: RunAsAny
runAsUser:
rule: RunAsAny
seLinux:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
volumes:
- secret
- configMap
- persistentVolumeClaim
- emptyDir
persistence:
enabled: true
enableInitChown: true
labels:
enabled: false
accessModes:
- ReadWriteOnce
size: 8Gi
annotations: {}
extraVolumes: []
extraVolumeMounts: []
extraContainers: []
extraInitContainers:
- name: sysctl
image: docker.io/bitnami/bitnami-shell:10-debian-10-r199
imagePullPolicy: "IfNotPresent"
command:
- /bin/bash
- -ec
- |
CURRENT=`sysctl -n vm.max_map_count`;
DESIRED="262144";
if [ "$DESIRED" -gt "$CURRENT" ]; then
sysctl -w vm.max_map_count=262144;
fi;
CURRENT=`sysctl -n fs.file-max`;
DESIRED="65536";
if [ "$DESIRED" -gt "$CURRENT" ]; then
sysctl -w fs.file-max=65536;
fi;
securityContext:
privileged: true
priorityClassName: ""
antiAffinityTopologyKey: "kubernetes.io/hostname"
antiAffinity: "soft"
nodeAffinity: {}
topologySpreadConstraints: []
podManagementPolicy: "Parallel"
enableServiceLinks: true
protocol: http
httpPort: 9200
transportPort: 9300
service:
labels: {}
labelsHeadless: {}
headless:
annotations: {}
type: ClusterIP
nodePort: ""
annotations: {}
httpPortName: http
transportPortName: transport
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: ""
updateStrategy: RollingUpdate
maxUnavailable: 1
podSecurityContext:
fsGroup: 1000
runAsUser: 1000
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
securityConfig:
enabled: true
path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
actionGroupsSecret:
configSecret:
internalUsersSecret:
rolesSecret:
rolesMappingSecret:
tenantsSecret:
config:
securityConfigSecret: ""
dataComplete: true
data:
config.yml: |-
config:
dynamic:
authc:
basic_internal_auth_domain:
description: "Authenticate via HTTP Basic"
http_enabled: true
transport_enabled: true
order: 1
http_authenticator:
type: "basic"
challenge: false
authentication_backend:
type: "internal"
openid_auth_domain:
order: 0
http_enabled: true
transport_enabled: true
http_authenticator:
type: openid
challenge: false
config:
enable_ssl: true
verify_hostnames: false
subject_key: preferred_username
roles_key: roles
openid_connect_url: https://login.microsoftonline.com/<ms_id>/v2.0/.well-known/openid-configuration
authentication_backend:
type: noop
roles_mapping.yml: |-
all_access
reserved: false
backend_roles:
- "admin"
description: "Maps admin to all_access"
terminationGracePeriod: 120
sysctlVmMaxMapCount: 262144
readinessProbe:
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 3
timeoutSeconds: 2000
schedulerName: ""
imagePullSecrets:
- name: regcred
nodeSelector: {}
tolerations: []
ingress:
enabled: false
annotations: {}
path: /
hosts:
- chart-example.local
tls: []
nameOverride: ""
fullnameOverride: ""
masterTerminationFix: false
lifecycle: {}
keystore: []
networkPolicy:
create: false
http:
enabled: false
fsGroup: ""
sysctl:
enabled: false
plugins:
enabled: false
installList: []
extraObjects: []
opensearch-dashboards.yaml
---
opensearchHosts: "http://opensearch-cluster-master:9200"
replicaCount: 1
image:
repository: "<registry>"
tag: "1.3.1"
pullPolicy: "IfNotPresent"
imagePullSecrets:
- name: regcred
nameOverride: ""
fullnameOverride: ""
serviceAccount:
create: true
annotations: {}
name: ""
rbac:
create: true
secretMounts: []
podAnnotations: {}
extraEnvs: []
envFrom: []
extraVolumes: []
extraVolumeMounts: []
extraInitContainers: ""
extraContainers: ""
podSecurityContext: {}
securityContext:
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 1000
config:
opensearch_dashboards.yml: |
opensearch_security.cookie.secure: false
opensearch_security.auth.type: openid
opensearch_security.openid.client_id: <client_id>
opensearch_security.openid.client_secret: <client_secret>
opensearch_security.openid.base_redirect_url: https://<some_aws_id>.elb.amazonaws.com
opensearch_security.openid.connect_url: https://login.microsoftonline.com/<MS id>/v2.0/.well-known/openid-configuration
priorityClassName: ""
opensearchAccount:
secret: ""
keyPassphrase:
enabled: false
labels: {}
hostAliases: []
serverHost: "0.0.0.0"
service:
type: ClusterIP
port: 5601
loadBalancerIP: ""
nodePort: ""
labels: {}
annotations: {}
loadBalancerSourceRanges: []
httpPortName: http
ingress:
enabled: false
annotations: {}
hosts:
- host: chart-example.local
paths:
- path: /
backend:
serviceName: chart-example.local
servicePort: 80
tls: []
resources:
requests:
cpu: "100m"
memory: "512M"
limits:
cpu: "100m"
memory: "512M"
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
updateStrategy:
type: "Recreate"
nodeSelector: {}
tolerations: []
affinity: {}
extraObjects: []

Error on Telegraf Helm Chart update: Error parsing data

I'm trying to deploy the Telegraf Helm chart on Kubernetes:
helm upgrade --install telegraf-instance -f values.yaml influxdata/telegraf
When I add the modbus input plugin with holding_registers I get this error:
[telegraf] Error running agent: Error loading config file /etc/telegraf/telegraf.conf: Error parsing data: line 49: key `name’ is in conflict with line 2fd
My values.yaml is below:
## Default values.yaml for Telegraf
## This is a YAML-formatted file.
## ref: https://hub.docker.com/r/library/telegraf/tags/
replicaCount: 1
image:
repo: "telegraf"
tag: "1.21.4"
pullPolicy: IfNotPresent
podAnnotations: {}
podLabels: {}
imagePullSecrets: []
args: []
env:
- name: HOSTNAME
value: "telegraf-polling-service"
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
service:
enabled: true
type: ClusterIP
annotations: {}
rbac:
create: true
clusterWide: false
rules: []
serviceAccount:
create: false
name:
annotations: {}
config:
agent:
interval: 60s
round_interval: true
metric_batch_size: 1000000
metric_buffer_limit: 100000000
collection_jitter: 0s
flush_interval: 60s
flush_jitter: 0s
precision: ''
hostname: '9825128'
omit_hostname: false
processors:
- enum:
mapping:
field: "status"
dest: "status_code"
value_mappings:
healthy: 1
problem: 2
critical: 3
inputs:
- modbus:
name: "PS MAIN ENGINE"
controller: 'tcp://192.168.0.101:502'
slave_id: 1
holding_registers:
- name: "Coolant Level"
byte_order: CDAB
data_type: FLOAT32
scale: 0.001
address: [51410, 51411]
- modbus:
name: "SB MAIN ENGINE"
controller: 'tcp://192.168.0.102:502'
slave_id: 1
holding_registers:
- name: "Coolant Level"
byte_order: CDAB
data_type: FLOAT32
scale: 0.001
address: [51410, 51411]
outputs:
- influxdb_v2:
token: token
organization: organisation
bucket: bucket
urls:
- "url"
metrics:
health:
enabled: true
service_address: "http://:8888"
threshold: 5000.0
internal:
enabled: true
collect_memstats: false
pdb:
create: true
minAvailable: 1
The problem was resolved by doing the following steps:
deleted the config section of my values.yaml
added my telegraf.conf to the /additional_config path
added the configmap to Kubernetes with the following command (an equivalent manifest is sketched after the values snippet below):
kubectl create configmap external-config --from-file=/additional_config
added the following configuration to values.yaml:
volumes:
  - name: my-config
    configMap:
      name: external-config
volumeMounts:
  - name: my-config
    mountPath: /additional_config
args:
  - "--config=/etc/telegraf/telegraf.conf"
  - "--config-directory=/additional_config"

Unable to deploy mongodb community operator in openshift

I'm trying to deploy the mongodb community operator in openshift 3.11, using the following commands:
git clone https://github.com/mongodb/mongodb-kubernetes-operator.git
cd mongodb-kubernetes-operator
oc new-project mongodb
oc create -f deploy/crds/mongodb.com_mongodb_crd.yaml -n mongodb
oc create -f deploy/operator/role.yaml -n mongodb
oc create -f deploy/operator/role_binding.yaml -n mongodb
oc create -f deploy/operator/service_account.yaml -n mongodb
oc apply -f deploy/openshift/operator_openshift.yaml -n mongodb
oc apply -f deploy/crds/mongodb.com_v1_mongodb_openshift_cr.yaml -n mongodb
The operator pod is running successfully, but the MongoDB replica set pods do not spin up. The error is as follows:
[kubenode@master mongodb-kubernetes-operator]$ oc get pods
NAME                                           READY   STATUS             RESTARTS   AGE
example-openshift-mongodb-0                    1/2     CrashLoopBackOff   4          2m
mongodb-kubernetes-operator-66bfcbcf44-9xvj7   1/1     Running            0          2m
[kubenode@master mongodb-kubernetes-operator]$ oc logs -f example-openshift-mongodb-0 -c mongodb-agent
panic: Failed to get current user: user: unknown userid 1000510000
goroutine 1 [running]:
com.tengen/cm/util.init.3()
/data/mci/2f46ec94982c5440960d2b2bf2b6ae15/mms-automation/build/go-dependencies/src/com.tengen/cm/util/user.go:14 +0xe5
I have gone through all the issues raised on the mongodb-kubernetes-operator repository which are related to this issue (reference), and found a suggestion to set the MANAGED_SECURITY_CONTEXT environment variable to true in the operator, mongodb and mongodb-agent containers.
I have done so for all of these containers, but am still facing the same issue.
Here is the confirmation that the environment variables are correctly set:
[kubenode@master mongodb-kubernetes-operator]$ oc set env statefulset.apps/example-openshift-mongodb --list
# statefulsets/example-openshift-mongodb, container mongodb-agent
AGENT_STATUS_FILEPATH=/var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
AUTOMATION_CONFIG_MAP=example-openshift-mongodb-config
HEADLESS_AGENT=true
MANAGED_SECURITY_CONTEXT=true
# POD_NAMESPACE from field path metadata.namespace
# statefulsets/example-openshift-mongodb, container mongod
AGENT_STATUS_FILEPATH=/healthstatus/agent-health-status.json
MANAGED_SECURITY_CONTEXT=true
[kubenode@master mongodb-kubernetes-operator]$ oc set env deployment.apps/mongodb-kubernetes-operator --list
# deployments/mongodb-kubernetes-operator, container mongodb-kubernetes-operator
# WATCH_NAMESPACE from field path metadata.namespace
# POD_NAME from field path metadata.name
MANAGED_SECURITY_CONTEXT=true
OPERATOR_NAME=mongodb-kubernetes-operator
AGENT_IMAGE=quay.io/mongodb/mongodb-agent:10.19.0.6562-1
VERSION_UPGRADE_HOOK_IMAGE=quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
Operator Information
Operator Version: 0.3.0
MongoDB Image used: 4.2.6
Cluster Information
[kubenode@master mongodb-kubernetes-operator]$ openshift version
openshift v3.11.0+62803d0-1
[kubenode@master mongodb-kubernetes-operator]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2018-10-15T09:45:30Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.0+d4cacc0", GitCommit:"d4cacc0", GitTreeState:"clean", BuildDate:"2020-12-07T17:59:40Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Update
When I check the replica pod yaml (see below), I see three occurrences of runAsUser security context set as 1000510000. I'm not sure how, but this is being set even though I'm not setting it manually.
[kubenode@master mongodb-kubernetes-operator]$ oc get -o yaml pod example-openshift-mongodb-0
apiVersion: v1
kind: Pod
metadata:
annotations:
openshift.io/scc: restricted
creationTimestamp: 2021-01-19T07:45:05Z
generateName: example-openshift-mongodb-
labels:
app: example-openshift-mongodb-svc
controller-revision-hash: example-openshift-mongodb-6549495b
statefulset.kubernetes.io/pod-name: example-openshift-mongodb-0
name: example-openshift-mongodb-0
namespace: mongodb
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: StatefulSet
name: example-openshift-mongodb
uid: 3e91eb40-5a2a-11eb-a5e0-0050569b1f59
resourceVersion: "15616863"
selfLink: /api/v1/namespaces/mongodb/pods/example-openshift-mongodb-0
uid: 3ea17a28-5a2a-11eb-a5e0-0050569b1f59
spec:
containers:
- command:
- agent/mongodb-agent
- -cluster=/var/lib/automation/config/cluster-config.json
- -skipMongoStart
- -noDaemonize
- -healthCheckFilePath=/var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
- -serveStatusPort=5000
- -useLocalMongoDbTools
env:
- name: AGENT_STATUS_FILEPATH
value: /var/log/mongodb-mms-automation/healthstatus/agent-health-status.json
- name: AUTOMATION_CONFIG_MAP
value: example-openshift-mongodb-config
- name: HEADLESS_AGENT
value: "true"
- name: MANAGED_SECURITY_CONTEXT
value: "true"
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: quay.io/mongodb/mongodb-agent:10.19.0.6562-1
imagePullPolicy: Always
name: mongodb-agent
readinessProbe:
exec:
command:
- /var/lib/mongodb-mms-automation/probes/readinessprobe
failureThreshold: 60
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources: {}
securityContext:
capabilities:
drop:
- KILL
- MKNOD
- SETGID
- SETUID
runAsUser: 1000510000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/lib/automation/config
name: automation-config
readOnly: true
- mountPath: /data
name: data-volume
- mountPath: /var/lib/mongodb-mms-automation/authentication
name: example-openshift-mongodb-agent-scram-credentials
- mountPath: /var/log/mongodb-mms-automation/healthstatus
name: healthstatus
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: mongodb-kubernetes-operator-token-lr9l4
readOnly: true
- command:
- /bin/sh
- -c
- |2
# run post-start hook to handle version changes
/hooks/version-upgrade
# wait for config to be created by the agent
while [ ! -f /data/automation-mongod.conf ]; do sleep 3 ; done ; sleep 2 ;
# start mongod with this configuration
exec mongod -f /data/automation-mongod.conf ;
env:
- name: AGENT_STATUS_FILEPATH
value: /healthstatus/agent-health-status.json
- name: MANAGED_SECURITY_CONTEXT
value: "true"
image: mongo:4.2.6
imagePullPolicy: IfNotPresent
name: mongod
resources: {}
securityContext:
capabilities:
drop:
- KILL
- MKNOD
- SETGID
- SETUID
runAsUser: 1000510000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /data
name: data-volume
- mountPath: /var/lib/mongodb-mms-automation/authentication
name: example-openshift-mongodb-agent-scram-credentials
- mountPath: /healthstatus
name: healthstatus
- mountPath: /hooks
name: hooks
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: mongodb-kubernetes-operator-token-lr9l4
readOnly: true
dnsPolicy: ClusterFirst
hostname: example-openshift-mongodb-0
imagePullSecrets:
- name: mongodb-kubernetes-operator-dockercfg-jhplw
initContainers:
- command:
- cp
- version-upgrade-hook
- /hooks/version-upgrade
image: quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
imagePullPolicy: Always
name: mongod-posthook
resources: {}
securityContext:
capabilities:
drop:
- KILL
- MKNOD
- SETGID
- SETUID
runAsUser: 1000510000
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /hooks
name: hooks
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: mongodb-kubernetes-operator-token-lr9l4
readOnly: true
nodeName: node1.192.168.27.116.nip.io
nodeSelector:
node-role.kubernetes.io/compute: "true"
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000510000
seLinuxOptions:
level: s0:c23,c2
serviceAccount: mongodb-kubernetes-operator
serviceAccountName: mongodb-kubernetes-operator
subdomain: example-openshift-mongodb-svc
terminationGracePeriodSeconds: 30
volumes:
- name: data-volume
persistentVolumeClaim:
claimName: data-volume-example-openshift-mongodb-0
- name: automation-config
secret:
defaultMode: 416
secretName: example-openshift-mongodb-config
- name: example-openshift-mongodb-agent-scram-credentials
secret:
defaultMode: 384
secretName: example-openshift-mongodb-agent-scram-credentials
- emptyDir: {}
name: healthstatus
- emptyDir: {}
name: hooks
- name: mongodb-kubernetes-operator-token-lr9l4
secret:
defaultMode: 420
secretName: mongodb-kubernetes-operator-token-lr9l4
status:
conditions:
- lastProbeTime: null
lastTransitionTime: 2021-01-19T07:46:45Z
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: 2021-01-19T07:46:39Z
message: 'containers with unready status: [mongodb-agent]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: null
message: 'containers with unready status: [mongodb-agent]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: 2021-01-19T07:45:05Z
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://bd3ede9178bb78267bc19d1b5da0915d3bcd1d4dcee3e142c7583424bd2aa777
image: docker.io/mongo:4.2.6
imageID: docker-pullable://docker.io/mongo@sha256:c880f6b56f443bb4d01baa759883228cd84fa8d78fa1a36001d1c0a0712b5a07
lastState: {}
name: mongod
ready: true
restartCount: 0
state:
running:
startedAt: 2021-01-19T07:46:55Z
- containerID: docker://5e39c0b6269b8231bbf9cabb4ff3457d9f91e878eff23953e318a9475fb8a90e
image: quay.io/mongodb/mongodb-agent:10.19.0.6562-1
imageID: docker-pullable://quay.io/mongodb/mongodb-agent@sha256:790c2670ef7cefd61cfaabaf739de16dbd2e07dc3b539add0da21ab7d5ac7626
lastState:
terminated:
containerID: docker://5e39c0b6269b8231bbf9cabb4ff3457d9f91e878eff23953e318a9475fb8a90e
exitCode: 2
finishedAt: 2021-01-19T19:39:58Z
reason: Error
startedAt: 2021-01-19T19:39:58Z
name: mongodb-agent
ready: false
restartCount: 144
state:
waiting:
message: Back-off 5m0s restarting failed container=mongodb-agent pod=example-openshift-mongodb-0_mongodb(3ea17a28-5a2a-11eb-a5e0-0050569b1f59)
reason: CrashLoopBackOff
hostIP: 192.168.27.116
initContainerStatuses:
- containerID: docker://7c31cef2a68e3e6100c2cc9c83e3780313f1e8ab43bebca79ad4d48613f124bd
image: quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook:1.0.2
imageID: docker-pullable://quay.io/mongodb/mongodb-kubernetes-operator-version-upgrade-post-start-hook@sha256:e99105b1c54e12913ddaf470af8025111a6e6e4c8917fc61be71d1bc0328e7d7
lastState: {}
name: mongod-posthook
ready: true
restartCount: 0
state:
terminated:
containerID: docker://7c31cef2a68e3e6100c2cc9c83e3780313f1e8ab43bebca79ad4d48613f124bd
exitCode: 0
finishedAt: 2021-01-19T07:46:45Z
reason: Completed
startedAt: 2021-01-19T07:46:44Z
phase: Running
podIP: 10.129.0.119
qosClass: BestEffort
startTime: 2021-01-19T07:46:39Z

Airflow on Kubernetes: Errno 13 - Permission denied: '/opt/airflow/logs/scheduler'

I am running Airflow on Kubernetes from the stable helm chart. I'm running this in an AWS environment. This error exists with and without mounting any external volumes for log storage. I tried to set the configuration of the [logs] section to point to an EFS volume that I created. The PV gets mounted through a PVC but my containers are crashing (scheduler and web) due to the following error:
*** executing Airflow initdb...
Unable to load the config, contains a configuration error.
Traceback (most recent call last):
File "/usr/local/lib/python3.6/logging/config.py", line 565, in configure
handler = self.configure_handler(handlers[name])
File "/usr/local/lib/python3.6/logging/config.py", line 738, in configure_handler
result = factory(**kwargs)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/utils/log/file_processor_handler.py", line 50, in __init__
os.makedirs(self._get_log_directory())
File "/usr/local/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/opt/airflow/logs/scheduler/2020-08-20'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 25, in <module>
from airflow.configuration import conf
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/__init__.py", line 47, in <module>
settings.initialize()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/settings.py", line 374, in initialize
LOGGING_CLASS_PATH = configure_logging()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/logging_config.py", line 68, in configure_logging
raise e
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/logging_config.py", line 63, in configure_logging
dictConfig(logging_config)
File "/usr/local/lib/python3.6/logging/config.py", line 802, in dictConfig
dictConfigClass(config).configure()
File "/usr/local/lib/python3.6/logging/config.py", line 573, in configure
'%r: %s' % (name, e))
ValueError: Unable to configure handler 'processor': [Errno 13] Permission denied: '/opt/airflow/logs/scheduler/2020-08-20'
Persistent volume (created manually not from the stable/airflow chart)
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capacity":{"storage":"5Gi"},"csi":{"driver":"efs.csi.aws.com","volumeHandle":"fs-e476a166"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"efs-sc","volumeMode":"Filesystem"}}
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-08-20T15:47:21Z"
finalizers:
- kubernetes.io/pv-protection
name: efs-pv
resourceVersion: "49476860"
selfLink: /api/v1/persistentvolumes/efs-pv
uid: 45d9f5ea-66c1-493e-a2f5-03e17f397747
spec:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: efs-claim
namespace: airflow
resourceVersion: "49476857"
uid: 354103ea-f8a9-47f1-a7cf-8f449f9a2e8b
csi:
driver: efs.csi.aws.com
volumeHandle: fs-e476a166
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
volumeMode: Filesystem
status:
phase: Bound
Persistent Volume Claim for logs (created manually not from the stable/airflow chart):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"efs-claim","namespace":"airflow"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"efs-sc"}}
pv.kubernetes.io/bind-completed: "yes"
pv.kubernetes.io/bound-by-controller: "yes"
creationTimestamp: "2020-08-20T15:47:46Z"
finalizers:
- kubernetes.io/pvc-protection
name: efs-claim
namespace: airflow
resourceVersion: "49476866"
selfLink: /api/v1/namespaces/airflow/persistentvolumeclaims/efs-claim
uid: 354103ea-f8a9-47f1-a7cf-8f449f9a2e8b
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
storageClassName: efs-sc
volumeMode: Filesystem
volumeName: efs-pv
status:
accessModes:
- ReadWriteMany
capacity:
storage: 5Gi
phase: Bound
My values.yaml is below:
airflow:
image:
repository: apache/airflow
tag: 1.10.10-python3.6
## values: Always or IfNotPresent
pullPolicy: IfNotPresent
pullSecret: ""
executor: KubernetesExecutor
fernetKey: "XXXXXXXXXHIVb8jK6lfmSAvx4mO6Arehnc="
config:
AIRFLOW__CORE__REMOTE_LOGGING: "True"
AIRFLOW__CORE__REMOTE_BASE_LOG_FOLDER: "s3://mybucket/airflow/logs"
AIRFLOW__CORE__REMOTE_LOG_CONN_ID: "MyS3Conn"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY: "apache/airflow"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG: "1.10.10-python3.6"
AIRFLOW__KUBERNETES__WORKER_CONTAINER_IMAGE_PULL_POLICY: "IfNotPresent"
AIRFLOW__KUBERNETES__WORKER_PODS_CREATION_BATCH_SIZE: "10"
AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM: "efs-claim"
AIRFLOW__KUBERNETES__GIT_REPO: "git@github.com:org/myrepo.git"
AIRFLOW__KUBERNETES__GIT_BRANCH: "develop"
AIRFLOW__KUBERNETES__GIT_DAGS_FOLDER_MOUNT_POINT: "/opt/airflow/dags"
AIRFLOW__KUBERNETES__DAGS_VOLUME_SUBPATH: "repo/"
AIRFLOW__KUBERNETES__GIT_SSH_KEY_SECRET_NAME: "airflow-git-keys"
AIRFLOW__KUBERNETES__NAMESPACE: "airflow"
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: "True"
AIRFLOW__KUBERNETES__RUN_AS_USER: "50000"
AIRFLOW__CORE__LOAD_EXAMPLES: "False"
AIRFLOW__SCHEDULER__DAG_DIR_LIST_INTERVAL: "60"
AIRFLOW__KUBERNETES__WORKER_SERVICE_ACCOUNT_NAME: "airflow"
podAnnotations: {}
extraEnv: []
extraConfigmapMounts: []
extraContainers: []
extraPipPackages: []
extraVolumeMounts: []
extraVolumes: []
scheduler:
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
podDisruptionBudget:
enabled: true
maxUnavailable: "100%"
minAvailable: ""
connections:
- id: MyS3Conn
type: aws
extra: |
{
"aws_access_key_id": "XXXXXXXXX",
"aws_secret_access_key": "XXXXXXXX",
"region_name":"us-west-1"
}
refreshConnections: true
variables: |
{}
pools: |
{}
numRuns: -1
initdb: true
preinitdb: false
initialStartupDelay: 0
extraInitContainers: []
web:
resources: {}
replicas: 1
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
service:
annotations: {}
sessionAffinity: "None"
sessionAffinityConfig: {}
type: ClusterIP
externalPort: 8080
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePort:
http: ""
baseUrl: "http://localhost:8080"
serializeDAGs: false
extraPipPackages: []
initialStartupDelay: 0
minReadySeconds: 5
readinessProbe:
enabled: false
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
livenessProbe:
enabled: true
scheme: HTTP
initialDelaySeconds: 300
periodSeconds: 30
timeoutSeconds: 3
successThreshold: 1
failureThreshold: 2
secretsDir: /var/airflow/secrets
secrets: []
secretsMap:
workers:
enabled: false
resources: {}
replicas: 1
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
autoscaling:
enabled: false
maxReplicas: 2
metrics: []
initialStartupDelay: 0
celery:
instances: 1
gracefullTermination: false
gracefullTerminationPeriod: 600
terminationPeriod: 60
secretsDir: /var/airflow/secrets
secrets: []
secretsMap:
flower:
enabled: false
resources: {}
nodeSelector: {}
affinity: {}
tolerations: []
labels: {}
podLabels: {}
annotations: {}
podAnnotations: {}
basicAuthSecret: ""
basicAuthSecretKey: ""
urlPrefix: ""
service:
annotations: {}
type: ClusterIP
externalPort: 5555
loadBalancerIP: ""
loadBalancerSourceRanges: []
nodePort:
http: ""
initialStartupDelay: 0
extraConfigmapMounts: []
logs:
path: /opt/airflow/logs
persistence:
enabled: true
existingClaim: efs-claim
subPath: ""
storageClass: efs-sc
accessMode: ReadWriteMany
size: 1Gi
dags:
path: /opt/airflow/dags
doNotPickle: false
installRequirements: false
persistence:
enabled: false
existingClaim: ""
subPath: ""
storageClass: ""
accessMode: ReadOnlyMany
size: 1Gi
git:
url: git@github.com:org/myrepo.git
ref: develop
secret: airflow-git-keys
sshKeyscan: false
privateKeyName: id_rsa
repoHost: github.com
repoPort: 22
gitSync:
enabled: true
resources: {}
image:
repository: alpine/git
tag: latest
pullPolicy: Always
refreshTime: 60
initContainer:
enabled: false
resources: {}
image:
repository: alpine/git
tag: latest
pullPolicy: Always
mountPath: "/dags"
syncSubPath: ""
ingress:
enabled: false
web:
annotations: {}
path: ""
host: ""
livenessPath: ""
tls:
enabled: false
secretName: ""
precedingPaths: []
succeedingPaths: []
flower:
annotations: {}
path: ""
host: ""
livenessPath: ""
tls:
enabled: false
secretName: ""
rbac:
create: true
serviceAccount:
create: true
name: ""
annotations: {}
extraManifests: []
postgresql:
enabled: true
postgresqlDatabase: airflow
postgresqlUsername: postgres
postgresqlPassword: airflow
existingSecret: ""
existingSecretKey: "postgresql-password"
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 5Gi
externalDatabase:
type: postgres
host: localhost
port: 5432
database: airflow
user: airflow
passwordSecret: ""
passwordSecretKey: "postgresql-password"
redis:
enabled: false
password: airflow
existingSecret: ""
existingSecretKey: "redis-password"
cluster:
enabled: false
slaveCount: 1
master:
resources: {}
persistence:
enabled: false
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
slave:
resources: {}
persistence:
enabled: false
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
externalRedis:
host: localhost
port: 6379
databaseNumber: 1
passwordSecret: ""
passwordSecretKey: "redis-password"
serviceMonitor:
enabled: false
selector:
prometheus: kube-prometheus
path: /admin/metrics
interval: "30s"
prometheusRule:
enabled: false
additionalLabels: {}
groups: []
I'm not really sure what to do here; does anyone know how to fix the permission error?
I had this issue on the Google Cloud Platform with the airflow Helm chart 1.2.0 (which uses Airflow 2).
What ended up working was:
extraInitContainers:
  - name: fix-volume-logs-permissions
    image: busybox
    command: [ "sh", "-c", "chown -R 50000:0 /opt/airflow/logs/" ]
    securityContext:
      runAsUser: 0
    volumeMounts:
      - mountPath: /opt/airflow/logs/
        name: logs
by tweaking based on Ajay's answer. Please note that:
the values 50000:0 are based on the uid and gid set up in your values.yaml
you need to use extraInitContainers under scheduler and not under workers, as placed in the sketch below
"logs" seems to be the volume name automatically used by the Helm logging config when persistence is enabled
the security context was necessary for me, or else the chown failed due to insufficient privileges
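For clarity, a sketch of where this lands in values.yaml, using the scheduler.extraInitContainers field shown in the question's values above (names and uid/gid as in the snippet):
scheduler:
  extraInitContainers:
    # Runs as root before the scheduler starts and hands the logs volume
    # over to the airflow uid/gid used in this deployment (50000:0).
    - name: fix-volume-logs-permissions
      image: busybox
      command: ["sh", "-c", "chown -R 50000:0 /opt/airflow/logs/"]
      securityContext:
        runAsUser: 0
      volumeMounts:
        - name: logs
          mountPath: /opt/airflow/logs/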
You can use extraInitContainers with the scheduler to change the permissions, something like this:
extraInitContainers:
  - name: volume-logs
    image: busybox
    command: ["sh", "-c", "chown -R 50000:50000 /opt/airflow/logs/"]
    volumeMounts:
      - mountPath: /opt/airflow/logs/
        name: logs-data
This will change the permissions of the mount point.
You can also try setting workers.persistence.fixPermissions: true in values.yaml; that works as well, as sketched below.
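For clarity, that option sits under the workers block of values.yaml (assuming your chart version exposes it):
workers:
  persistence:
    # chart option referenced above: chowns the persistent volume on startup
    fixPermissions: true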

How to change kube-controller-manager in kubernetes

I installed some packages in the kube-controller-manager container and copied some files to it.
If kube-controller-manager is restarted, all of my new configs disappear, but I want the new kube-controller-manager to be loaded after it restarts.
How can I persist my new packages and files in kube-controller-manager?
You can change it by editing the static pod manifest:
nano /etc/kubernetes/manifests/kube-controller-manager.yaml
and changing the image part (a sketch of the change follows the manifest below):
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=10.244.0.0/16
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    image: k8s.gcr.io/kube-controller-manager:v1.14.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      name: flexvolume-dir
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
      type: DirectoryOrCreate
    name: flexvolume-dir
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
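The only line that needs to change is image. A minimal sketch of the edit, assuming you have built a custom image on top of the stock one with your extra packages and files baked in (the registry, name, and tag below are placeholders):
spec:
  containers:
  - command:
    - kube-controller-manager
    # ...existing flags stay unchanged...
    # Hypothetical custom image built FROM k8s.gcr.io/kube-controller-manager:v1.14.2
    # with the extra packages and files added, so they survive restarts.
    image: registry.example.com/kube-controller-manager-custom:v1.14.2
    imagePullPolicy: IfNotPresent
After saving the file, the kubelet picks up the change in /etc/kubernetes/manifests and recreates the static pod with the new image.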