I want to update or inject environment variables into an existing pod. How can I do that via a Kubernetes Job?
This is my DaemonSet:
apiVersion: apps/v1
kind: DaemonSet
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: aws-node
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2021-08-30T22:02:00+08:00"
      creationTimestamp: null
      labels:
        k8s-app: aws-node
    spec:
      containers:
      - env:
        - name: DISABLE_METRICS
          value: "false"
        - name: ENABLE_POD_ENI
          value: "false"
        - name: WARM_ENI_TARGET
          value: "1"
        - name: WARM_IP_TARGET
          value: "5"
        name: aws-node
      initContainers:
      - env:
        - name: DISABLE_TCP_EARLY_DEMUX
          value: "false"
        - name: AWS_VPC_K8S_CNI_EXTERNALSNAT
          value: "true"
I want to set ENABLE_POD_ENI to true in the main container and add WARM_IP_TARGET with a value of 1 in the init container.
How can I do that via a Kubernetes Job?
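One way to do this (a sketch, not taken from the original post) is a Job that runs kubectl and applies a strategic merge patch: env entries merge by name, so ENABLE_POD_ENI is updated in place and WARM_IP_TARGET is added to the init container. The DaemonSet name aws-node is assumed from the labels above; <namespace>, <init-container-name> and the env-patcher ServiceAccount (which must be bound to a Role allowing get/patch on daemonsets) are placeholders to fill in.
apiVersion: batch/v1
kind: Job
metadata:
  name: patch-aws-node-env
  namespace: <namespace>
spec:
  backoffLimit: 1
  template:
    spec:
      serviceAccountName: env-patcher
      restartPolicy: Never
      containers:
      - name: patch
        image: bitnami/kubectl:latest
        command: ["kubectl"]
        args:
        - -n
        - <namespace>
        - patch
        - daemonset
        - aws-node
        - --type=strategic
        - -p
        - '{"spec":{"template":{"spec":{"containers":[{"name":"aws-node","env":[{"name":"ENABLE_POD_ENI","value":"true"}]}],"initContainers":[{"name":"<init-container-name>","env":[{"name":"WARM_IP_TARGET","value":"1"}]}]}}}}'
For a one-off change from a workstation, kubectl set env daemonset/aws-node -n <namespace> -c aws-node ENABLE_POD_ENI=true handles the main container; the strategic patch above is a way to reach the init container as well.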
I'm trying to run a local Airflow instance on my laptop using minikube and a deployment.yml file, applied with the following command: kubectl apply -f ./deployment.yml.
After slightly tweaking this file I ended up with all three pods (postgres, webserver, scheduler) running fine, as confirmed by kubectl get pods.
The content of the file:
---
# Source: airflow/templates/rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
automountServiceAccountToken: true
---
# Source: airflow/charts/postgresql/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
password: "**************"
# We don't auto-generate LDAP password when it's not provided as we do for other passwords
---
# Source: airflow/templates/config/secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
airflow-password: "*************"
# Airflow keys must be base64-encoded, hence we need to pipe to 'b64enc' twice
# The auto-generation mechanism available at "common.secrets.passwords.manage" isn't compatible with encoding twice
# Therefore, we can only use this function if the secret already exists
airflow-fernet-key: "TldwdU0zRklTREZ0VDFkamVWUjFaMlozWTFKdWNFNUxTRXRxVm5Oa1p6az0="
airflow-secret-key: "VldWaWQySkhSVUZQZDNWQlltbG1UVzUzVkdwWmVVTkxPR1ZCZWpoQ05tUT0="
---
apiVersion: v1
kind: ConfigMap
metadata:
name: airflow-dependencies
namespace: "default"
data:
requirements.txt: |-
apache-airflow==2.2.3
pytest==6.2.4
python-slugify<5.0
funcy==1.16
apache-airflow-providers-mongo
apache-airflow-providers-postgres
apache-airflow-providers-slack
apache-airflow-providers-amazon
airflow_clickhouse_plugin
apache-airflow-providers-sftp
surveymonkey-python
---
# Source: airflow/templates/rbac/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
rules:
- apiGroups:
- ""
resources:
- "pods"
verbs:
- "create"
- "list"
- "get"
- "watch"
- "delete"
- "patch"
- apiGroups:
- ""
resources:
- "pods/log"
verbs:
- "get"
- apiGroups:
- ""
resources:
- "pods/exec"
verbs:
- "create"
- "get"
---
# Source: airflow/templates/rbac/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: release-name-airflow
subjects:
- kind: ServiceAccount
name: release-name-airflow
namespace: default
---
# Source: airflow/charts/postgresql/templates/primary/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql-hl
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
# Use this annotation in addition to the actual publishNotReadyAddresses
# field below because the annotation will stop being respected soon but the
# field is broken in some versions of Kubernetes:
# https://github.com/kubernetes/kubernetes/issues/58662
service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
spec:
type: ClusterIP
clusterIP: None
# We want all pods in the StatefulSet to have their addresses published for
# the sake of the other Postgresql pods even before they're ready, since they
# have to be able to talk to each other in order to become ready.
publishNotReadyAddresses: true
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/charts/postgresql/templates/primary/svc.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
type: ClusterIP
ports:
- name: tcp-postgresql
port: 5432
targetPort: tcp-postgresql
nodePort: null
selector:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
---
# Source: airflow/templates/web/service.yaml
apiVersion: v1
kind: Service
metadata:
name: release-name-airflow
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
spec:
type: NodePort
ports:
- name: http
port: 8080
nodePort: 30303
selector:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
---
# Source: airflow/templates/scheduler/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-scheduler
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: scheduler
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: scheduler
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-scheduler
image: "docker.io/bitnami/airflow-scheduler:2.2.3-debian-10-r57"
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: release-name-airflow
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
resources:
limits: {}
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/templates/web/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: release-name-airflow-web
namespace: default
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
spec:
selector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
replicas: 1
strategy:
rollingUpdate: {}
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: airflow
helm.sh/chart: airflow-12.0.5
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: web
annotations:
checksum/configmap: 01ba4719c80b6fe911b091a7c05124b64eeece964e09c058ef8f9805daca546b
spec:
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: airflow
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: web
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
serviceAccountName: release-name-airflow
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: airflow-web
image: docker.io/bitnami/airflow:2.2.3-debian-10-r62
imagePullPolicy: "IfNotPresent"
securityContext:
runAsNonRoot: true
runAsUser: 1001
env:
- name: AIRFLOW_FERNET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-fernet-key
- name: AIRFLOW_SECRET_KEY
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-secret-key
- name: AIRFLOW_LOAD_EXAMPLES
value: "no"
- name: AIRFLOW_DATABASE_NAME
value: "bitnami_airflow"
- name: AIRFLOW_DATABASE_USERNAME
value: "bn_airflow"
- name: AIRFLOW_DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: AIRFLOW_DATABASE_HOST
value: "release-name-postgresql"
- name: AIRFLOW_DATABASE_PORT_NUMBER
value: "5432"
- name: AIRFLOW_EXECUTOR
value: LocalExecutor
- name: AIRFLOW_WEBSERVER_HOST
value: "0.0.0.0"
- name: AIRFLOW_WEBSERVER_PORT_NUMBER
value: "8080"
- name: AIRFLOW_USERNAME
value: airflow
- name: AIRFLOW_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-airflow
key: airflow-password
- name: AIRFLOW_BASE_URL
value: "http://127.0.0.1:8080"
- name: AIRFLOW_LDAP_ENABLE
value: "no"
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/bitnami/airflow/dags
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: "True"
- name: AIRFLOW__CORE__DONOT_PICKLE
value: "False"
ports:
- name: http
containerPort: 8080
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 180
periodSeconds: 20
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
tcpSocket:
port: http
resources:
limits:
cpu: "2"
memory: 4Gi
requests: {}
volumeMounts:
- mountPath: /bitnami/python/requirements.txt
name: requirements
subPath: requirements.txt
- mountPath: /opt/bitnami/airflow/dags/src
name: airflow-dags
volumes:
- name: requirements
configMap:
name: airflow-dependencies
- name: airflow-dags
hostPath:
# directory location on host
path: /Users/admin/Desktop/FXC_Airflow/dags/src
# this field is optional
type: Directory
---
# Source: airflow/charts/postgresql/templates/primary/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: release-name-postgresql
namespace: default
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
replicas: 1
serviceName: release-name-postgresql-hl
updateStrategy:
rollingUpdate: {}
type: RollingUpdate
selector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
template:
metadata:
name: release-name-postgresql
labels:
app.kubernetes.io/name: postgresql
helm.sh/chart: postgresql-11.0.6
app.kubernetes.io/instance: release-name
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: primary
annotations:
spec:
serviceAccountName: default
affinity:
podAffinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchLabels:
app.kubernetes.io/name: postgresql
app.kubernetes.io/instance: release-name
app.kubernetes.io/component: primary
namespaces:
- "default"
topologyKey: kubernetes.io/hostname
weight: 1
nodeAffinity:
securityContext:
fsGroup: 1001
initContainers:
containers:
- name: postgresql
image: docker.io/bitnami/postgresql:14.1.0-debian-10-r80
imagePullPolicy: "IfNotPresent"
securityContext:
runAsUser: 1001
env:
- name: BITNAMI_DEBUG
value: "false"
- name: POSTGRESQL_PORT_NUMBER
value: "5432"
- name: POSTGRESQL_VOLUME_DIR
value: "/bitnami/postgresql"
- name: PGDATA
value: "/bitnami/postgresql/data"
# Authentication
- name: POSTGRES_USER
value: "bn_airflow"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: release-name-postgresql
key: password
- name: POSTGRES_DB
value: "bitnami_airflow"
# Replication
# Initdb
# Standby
# LDAP
- name: POSTGRESQL_ENABLE_LDAP
value: "no"
# TLS
- name: POSTGRESQL_ENABLE_TLS
value: "no"
# Audit
- name: POSTGRESQL_LOG_HOSTNAME
value: "false"
- name: POSTGRESQL_LOG_CONNECTIONS
value: "false"
- name: POSTGRESQL_LOG_DISCONNECTIONS
value: "false"
- name: POSTGRESQL_PGAUDIT_LOG_CATALOG
value: "off"
# Others
- name: POSTGRESQL_CLIENT_MIN_MESSAGES
value: "error"
- name: POSTGRESQL_SHARED_PRELOAD_LIBRARIES
value: "pgaudit"
ports:
- name: tcp-postgresql
containerPort: 5432
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
readinessProbe:
failureThreshold: 6
initialDelaySeconds: 5
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
exec:
command:
- /bin/sh
- -c
- -e
- |
exec pg_isready -U "bn_airflow" -d "dbname=bitnami_airflow" -h 127.0.0.1 -p 5432
[ -f /opt/bitnami/postgresql/tmp/.initialized ] || [ -f /bitnami/postgresql/.initialized ]
resources:
limits: {}
requests:
cpu: 250m
memory: 256Mi
volumeMounts:
- name: dshm
mountPath: /dev/shm
- name: data
mountPath: /bitnami/postgresql
volumes:
- name: dshm
emptyDir:
medium: Memory
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "8Gi"
The idea is that after a successful deployment I would be able to access the webserver UI at localhost:30303, but I can't for some reason. It feels like a minor change should fix it...
So far I've tried connecting to the webserver pod (kubectl exec -it <webserver pod name> -- /bin/bash) and running two commands: airflow db init and airflow webserver -p 8080.
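One thing worth checking (this is not from the original post): with minikube, a NodePort Service is exposed on the minikube node's IP, not on the laptop's localhost, so the URL is normally obtained like this (Service name taken from the manifest above):
minikube service release-name-airflow --url
# or build it by hand from the node IP plus the nodePort (30303)
minikube ip                      # e.g. 192.168.49.2
curl http://$(minikube ip):30303
On some drivers (for example the Docker driver on macOS) the node IP is not directly reachable from the host and minikube service keeps a tunnel open for you, so the printed URL is the one to use.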
Here is my ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
name: chart-1591249502-zeppelin
namespace: ra-iot-dev
labels:
helm.sh/chart: zeppelin-0.1.0
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
app.kubernetes.io/version: "0.9.0"
app.kubernetes.io/managed-by: Helm
data:
log4j.properties: |-
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.DEBUG = INFO
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.logger.org.apache.zeppelin.python=DEBUG
log4j.logger.org.apache.zeppelin.spark=DEBUG
I'm trying to mount this file as /zeppelin/conf/log4j.properties inside the pod.
Here is my Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: chart-1591249502-zeppelin
labels:
helm.sh/chart: zeppelin-0.1.0
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
app.kubernetes.io/version: "0.9.0"
app.kubernetes.io/managed-by: Helm
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
template:
metadata:
labels:
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
spec:
serviceAccountName: chart-1591249502-zeppelin
securityContext:
{}
containers:
- name: zeppelin
securityContext:
{}
image: "apache/zeppelin:0.9.0"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{}
env:
- name: ZEPPELIN_PORT
value: "8080"
- name: ZEPPELIN_K8S_CONTAINER_IMAGE
value: apache/zeppelin:0.9.0
- name: ZEPPELIN_RUN_MODE
value: local
volumeMounts:
- name: log4j-properties-volume
mountPath: /zeppelin/conf/log4j.properties
volumes:
- name: log4j-properties-volume
configMap:
name: chart-1591249502-zeppelin
items:
- key: log4j.properties
path: keys
I'm getting this error event in Kubernetes:
Error: failed to start container "zeppelin": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"rootfs_linux.go:54: mounting \\"/var/lib/origin/openshift.local.volumes/pods/63ac209e-a626-11ea-9e39-0050569f5f65/volumes/kubernetes.io~configmap/log4j-properties-volume\\" to rootfs \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged\\" at \\"/var/lib/docker/overlay2/33f3199e46111afdcd64d21c58b010427c27761b02473967600fb95ab6d92e21/merged/zeppelin/conf/log4j.properties\\" caused \\"not a directory\\"\"" : Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Keep in mind that I only want to replace an existing file: the /zeppelin/conf/ directory contains several files, and I only want to replace /zeppelin/conf/log4j.properties.
Any ideas?
From the logs I can see that you are working on OpenShift; however, I was able to do this on GKE.
I deployed the plain Zeppelin Deployment from your example.
zeppelin#chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$ cat log4j.properties
#
# Licensed to the Apache Software Foundation (ASF) under one or more
...
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
...
# limitations under the License.
#
log4j.rootLogger = INFO, stdout
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
zeppelin#chart-1591249502-zeppelin-557d895cd5-v46dt:~/conf$
If you want to replace one specific file, you need to use subPath. There is also an article with another example, which can be found here.
Issue 1: A ConfigMap belongs to a namespace
Your Deployment did not contain a namespace, so it was deployed in the default namespace, while the ConfigMap specified namespace: ra-iot-dev.
$ kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
...
configmaps cm true ConfigMap
...
If you keep this namespace, you will probably get an error like:
MountVolume.SetUp failed for volume "log4j-properties-volume" : configmap "chart-1591249502-zeppelin" not found
Issue 2: subPath to replace a single file
I've changed one part in the Deployment (added subPath):
volumeMounts:
- name: log4j-properties-volume
  mountPath: /zeppelin/conf/log4j.properties
  subPath: log4j.properties
volumes:
- name: log4j-properties-volume
  configMap:
    name: chart-1591249502-zeppelin
and another in the ConfigMap (removed the namespace and set the proper names):
apiVersion: v1
kind: ConfigMap
metadata:
name: chart-1591249502-zeppelin
labels:
helm.sh/chart: zeppelin-0.1.0
app.kubernetes.io/name: zeppelin
app.kubernetes.io/instance: chart-1591249502
app.kubernetes.io/version: "0.9.0"
app.kubernetes.io/managed-by: Helm
data:
log4j.properties: |-
...
After that, the content of the file looks like this:
$ kubectl exec -ti chart-1591249502-zeppelin-64495dcfc8-ccddr -- /bin/bash
zeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~$ cd conf
zeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ ls
configuration.xsl log4j.properties log4j_yarn_cluster.properties zeppelin-env.cmd.template zeppelin-site.xml.template
interpreter-list log4j.properties2 shiro.ini.template zeppelin-env.sh.template
zeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$ cat log4j.properties
log4j.rootLogger = INFO, dailyfile
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.appender.dailyfile.DatePattern=.yyyy-MM-dd
log4j.appender.dailyfile.DEBUG = INFO
log4j.appender.dailyfile = org.apache.log4j.DailyRollingFileAppender
log4j.appender.dailyfile.File = ${zeppelin.log.file}
log4j.appender.dailyfile.layout = org.apache.log4j.PatternLayout
log4j.appender.dailyfile.layout.ConversionPattern=%5p [%d] ({%t} %F[%M]:%L) - %m%n
log4j.logger.org.apache.zeppelin.python=DEBUG
log4j.logger.org.apache.zeppelin.spark=DEBUGzeppelin#chart-1591249502-zeppelin-64495dcfc8-ccddr:~/conf$
---
apiVersion: v1
kind: ConfigMap
metadata:
name: application-config-test
namespace: ***
labels:
app: test
environment: ***
tier: backend
data:
application.properties: |-
ulff.kafka.configuration.acks=0
ulff.kafka.configuration[bootstrap.servers]=IP
ulff.kafka.topic=test-topic
ulff.enabled=true
logging.level.com.anurag.gigthree.ulff.kafka=DEBUG
management.port=9080
management.security.enabled=false
management.endpoints.web.exposure.include= "metrics,health,threaddump,prometheus,heapdump"
management.endpoint.prometheus.enabled=true
management.metrics.export.prometheus.enabled=true
## For apigee PROD
apigee.url=****
### Secrets in Kubenetes accessed by ENV variables
apigee.clientID=apigeeClientId
apigee.clientsecret=apigeeClientSecret
spring.mvc.throw-exception-if-no-handler-found=true
#For OAuth details for apigee
oauth2.config.clientId=${apigee.clientID}
oauth2.config.clientSecret=${apigee.clientsecret}
oauth2.config.authenticationScheme=form
oauth2.config.scopes[0]=test_INTEGRATION_ALL
oauth2.config.accessTokenUri=${apigee.url}/oauth2/token
oauth2.config.requestTimeout=55000
oauth2.restTemplateBuilder.enabled=true
#spring jackson properties
spring.jackson.default-property-inclusion=always
spring.jackson.generator.ignore-unknown=true
spring.jackson.mapper.accept-case-insensitive-properties=true
spring.jackson.deserialization.fail-on-unknown-properties=false
# service urls for apply profile
services.apigeeIntegrationAPI.doProfileChangeUrl=${apigee.url}/v1/testintegration
services.apigeeIntegrationAPI.modifyServiceOfSubscriberUrl=${apigee.url}/v1/testintegration/subscribers
# service urls for retrieve profile
services.apigeeIntegrationAPI.getProfileUrl=${apigee.url}/v1
services.apigeeIntegrationAPI.readKeyUrl=${apigee.url}/v1/testintegration
test.acfStatusConfig[1].country-prefix=
test.acfStatusConfig[1].country-code=
test.acfStatusConfig[1].profile-name=
test.acfStatusConfig[1].adult=ON
test.acfStatusConfig[1].hvw=ON
test.acfStatusConfig[1].ms=ON
test.acfStatusConfig[1].dc=ON
test.acfStatusConfig[1].at=OFF
test.acfStatusConfig[1].gambling=
test.acfStatusConfig[1].dating=OFF
test.acfStatusConfig[1].sex=OFF
test.acfStatusConfig[1].sn=OFF
logging.pattern.level=%X{ulff.transaction-id:-} -%5p
logging.config=/app/config/log4j2.yml
log4j2.yml: |-
Configutation:
name: test-ms
packages :
Appenders:
Console:
- name: sysout
target: SYSTEM_OUT
PatternLayout:
pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
- name: syserr
target: SYSTEM_ERR
PatternLayout:
pattern: "%d{HH:mm:ss.SSS} %-5p [%-7t] %F:%L - %m%n"
Filters:
ThresholdFilter :
level: "WARN"
onMatch: "ACCEPT"
Kafka:
name : kafkaAppender
topic: af.prod.ms.test.tomcat
JSONLayout:
complete: "false"
compact: "false"
eventEol: "true"
includeStacktrace: "true"
properties: "true"
Property:
name: "bootstrap.servers"
value: ""
Loggers:
Root:
level: INFO
AppenderRef:
- ref: sysout
- ref: syserr
#### test 1 test 2 Separate kafka log from application log
Logger:
- name: com.anurag
level: INFO
AppenderRef:
- ref: kafkaAppender
- name: org.springframework
level: INFO
AppenderRef:
- ref: kafkaAppender
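No mount is shown for this ConfigMap in the snippet above; for reference, it would typically be mounted as a directory at /app/config so that the logging.config=/app/config/log4j2.yml entry resolves. A minimal sketch, with test-app as a placeholder container name:
spec:
  containers:
  - name: test-app
    volumeMounts:
    # both keys (application.properties and log4j2.yml) appear as files under /app/config
    - name: application-config
      mountPath: /app/config
  volumes:
  - name: application-config
    configMap:
      name: application-config-test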
I want to create a custom 403 error page.
Currently I already have an Ingress created and in the annotations I have something like this:
"nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,88.100.01.01"
So any attempt to access my web app outside that IP range receives a 403 error.
In order to create a custom page I tried adding the following annotations:
"nginx.ingress.kubernetes.io/custom-http-errors": "403",
"nginx.ingress.kubernetes.io/default-backend": "default-http-backend"
where default-http-backend is the name of an app already deployed.
The Ingress has this:
{
"kind": "Ingress",
"apiVersion": "extensions/v1beta1",
"metadata": {
"name": "my-app-ingress",
"namespace": "my-app-test",
"selfLink": "/apis/extensions/v1beta1/namespaces/my-app-test/ingresses/my-app-ingress",
"uid": "8f31f2b4-428d-11ea-b15a-ee0dcf00d5a8",
"resourceVersion": "129105581",
"generation": 3,
"creationTimestamp": "2020-01-29T11:50:34Z",
"annotations": {
"kubernetes.io/ingress.class": "nginx",
"nginx.ingress.kubernetes.io/custom-http-errors": "403",
"nginx.ingress.kubernetes.io/default-backend": "default-http-backend",
"nginx.ingress.kubernetes.io/rewrite-target": "/",
"nginx.ingress.kubernetes.io/whitelist-source-range": "100.01.128.0/20,90.108.01.012"
}
},
"spec": {
"tls": [
{
"hosts": [
"my-app-test.retail-azure.js-devops.co.uk"
],
"secretName": "ssl-secret"
}
],
"rules": [
{
"host": "my-app-test.retail-azure.js-devops.co.uk",
"http": {
"paths": [
{
"path": "/api",
"backend": {
"serviceName": "my-app-backend",
"servicePort": 80
}
},
{
"path": "/",
"backend": {
"serviceName": "my-app-frontend",
"servicePort": 80
}
}
]
}
}
]
},
"status": {
"loadBalancer": {
"ingress": [
{}
]
}
}
}
Yet I always get the default 403.
What am I missing?
I've reproduced your scenario and it worked for me.
I'll try to guide you through the steps I followed.
Cloud provider: GKE
Kubernetes Version: v1.15.3
Namespace: default
I'm using 2 Deployments with 2 different images, plus a Service for each one.
Service 1: default-http-backend - uses the nginx image; it will be our default backend.
Service 2: custom-http-backend - uses the inanimate/echo-server image; this service will be served if the request comes from a whitelisted IP.
Ingress: Nginx ingress with annotations.
Expected behavior: The ingress will be configured with the default-backend, custom-http-errors and whitelist-source-range annotations. If the request is made from a whitelisted IP, the ingress will route it to custom-http-backend; if not, it will be routed to default-http-backend.
Deployment 1: default-http-backend
Create a file default-http-backend.yaml with this content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-http-backend
spec:
selector:
matchLabels:
app: default-http-backend
template:
metadata:
labels:
app: default-http-backend
spec:
containers:
- name: default-http-backend
image: nginx
ports:
- name: http
containerPort: 80
imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
spec:
selector:
app: default-http-backend
ports:
- protocol: TCP
port: 80
targetPort: 80
Apply the yaml file: k apply -f default-http-backend.yaml
Deployment 2: custom-http-backend
Create a file custom-http-backend.yaml with this content:
apiVersion: apps/v1
kind: Deployment
metadata:
name: custom-http-backend
spec:
selector:
matchLabels:
app: custom-http-backend
template:
metadata:
labels:
app: custom-http-backend
spec:
containers:
- name: custom-http-backend
image: inanimate/echo-server
ports:
- name: http
containerPort: 8080
imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
name: custom-http-backend
spec:
selector:
app: custom-http-backend
ports:
- protocol: TCP
port: 80
targetPort: 8080
Apply the yaml file: k apply -f custom-http-backend.yaml
Check that the services are up and running
I'm using the alias k for kubectl
➜ ~ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
custom-http-backend ClusterIP 10.125.5.227 <none> 80/TCP 73s
default-http-backend ClusterIP 10.125.9.218 <none> 80/TCP 5m41s
...
➜ ~ k get pods
NAME READY STATUS RESTARTS AGE
custom-http-backend-67844fb65d-k2mwl 1/1 Running 0 2m10s
default-http-backend-5485f569bd-fkd6f 1/1 Running 0 6m39s
...
You could test the service using port-forward:
default-http-backend
k port-forward svc/default-http-backend 8080:80
Try to access http://localhost:8080 in your browser to see the nginx default page.
custom-http-backend
k port-forward svc/custom-http-backend 8080:80
Try to access http://localhost:8080 in your browser to see the custom page provided by the echo-server image.
Ingress configuration
At this point we have both services up and running, and we need to install and configure the nginx ingress controller. You can follow the official documentation; the installation itself is not covered here.
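For completeness (the original answer defers to the docs), one common way to install the controller is via its official Helm chart; a sketch:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace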
Once it is installed, let's deploy the ingress. Based on the code you posted, I made some modifications: removed TLS, used another domain, removed the /api path (for test purposes only), and added my home IP to the whitelist.
Create a file my-app-ingress.yaml with the content:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-app-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/custom-http-errors: '403'
nginx.ingress.kubernetes.io/default-backend: default-http-backend
nginx.ingress.kubernetes.io/whitelist-source-range: 207.34.xxx.xx/32
spec:
rules:
- host: myapp.rabello.me
http:
paths:
- path: "/"
backend:
serviceName: custom-http-backend
servicePort: 80
Apply the spec: k apply -f my-app-ingress.yaml
Check the ingress with the command:
➜ ~ k get ing
NAME HOSTS ADDRESS PORTS AGE
my-app-ingress myapp.rabello.me 146.148.xx.xxx 80 36m
That's all!
If I test from home with my whitelisted IP, the custom page is shown, but if I try to access it from my cellphone on a 4G network, the nginx default page is displayed.
Note that I'm using the Ingress and the Services in the same namespace; if you need to work across namespaces, you need to use an ExternalName Service.
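A minimal sketch of such an ExternalName Service, assuming the real backend lives in a namespace called other-namespace (both names are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: custom-http-backend
  namespace: default            # the namespace where the Ingress lives
spec:
  type: ExternalName
  # resolves to the real Service in the other namespace via cluster DNS
  externalName: custom-http-backend.other-namespace.svc.cluster.local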
I hope that helps!
References:
kubernetes deployments
kubernetes service
nginx ingress
nginx annotations
I want to create a custom 403 error page. I currently have an Ingress with a whitelist-source-range annotation, so any attempt to access my web app from outside that IP range receives a 403 error.
In order to create a custom page, I tried adding the following annotations:
kind: Ingress
metadata:
name: my-app-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: "/"
nginx.ingress.kubernetes.io/custom-http-errors: '403'
nginx.ingress.kubernetes.io/default-backend: default-http-backend
nginx.ingress.kubernetes.io/whitelist-source-range: 125.10.156.36/32
spec:
rules:
- host: venkat.dev.vboffice.com
http:
paths:
- path: "/"
backend:
serviceName: custom-http-backend
servicePort: 80
where default-http-backend is the name of an app already deployed that serves the default nginx page.
If I test from home with my whitelisted IP, the custom page is shown, but if I try to access it from my cellphone on a 4G network, it displays the default backend 404 message.
Do I need to make any nginx config change in the custom-http-backend pod?
Deployment 1: default-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
name: default-http-backend
spec:
selector:
matchLabels:
app: default-http-backend
template:
metadata:
labels:
app: default-http-backend
spec:
containers:
- name: default-http-backend
image: nginx
ports:
- name: http
containerPort: 80
imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
spec:
selector:
app: default-http-backend
ports:
- protocol: TCP
port: 80
targetPort: 80
Deployment 2: custom-http-backend
apiVersion: apps/v1
kind: Deployment
metadata:
name: custom-http-backend
spec:
selector:
matchLabels:
app: custom-http-backend
template:
metadata:
labels:
app: custom-http-backend
spec:
containers:
- name: custom-http-backend
image: inanimate/echo-server
ports:
- name: http
containerPort: 8080
imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
name: custom-http-backend
spec:
selector:
app: custom-http-backend
ports:
- protocol: TCP
port: 80
targetPort: 8080
You can customize the 403 error page for ingress-nginx just by editing the nginx.tmpl template file (found under /etc/nginx/template in the controller) and then mounting the edited file into the ingress-nginx controller Deployment. Below is the part of nginx.tmpl that needs to be edited:
{{/* Build server redirects (from/to www) */}}
{{ range $redirect := .RedirectServers }}
## start server {{ $redirect.From }}
server {
server_name {{ $redirect.From }};
{{ buildHTTPListener $all $redirect.From }}
{{ buildHTTPSListener $all $redirect.From }}
ssl_certificate_by_lua_block {
certificate.call()
}
error_page 403 /403.html;
{{ if gt (len $cfg.BlockUserAgents) 0 }}
if ($block_ua) {
return 403;
}
{{ end }}
{{ if gt (len $cfg.BlockReferers) 0 }}
if ($block_ref) {
return 403;
}
{{ end }}
location = /403.html {
root /usr/local/nginx/html/;
internal;
}
set_by_lua_block $redirect_to {
local request_uri = ngx.var.request_uri
if string.sub(request_uri, -1) == "/" then
request_uri = string.sub(request_uri, 1, -2)
end
{{ if ne $all.ListenPorts.HTTPS 443 }}
{{ $redirect_port := (printf ":%v" $all.ListenPorts.HTTPS) }}
return string.format("%s://%s%s%s", ngx.var.scheme, "{{ $redirect.To }}", "{{ $redirect_port }}", request_uri)
{{ else }}
return string.format("%s://%s%s", ngx.var.scheme, "{{ $redirect.To }}", request_uri)
{{ end }}
}
return {{ $all.Cfg.HTTPRedirectCode }} $redirect_to;
}
## end server {{ $redirect.From }}
{{ end }}
{{ range $server := $servers }}
## start server {{ $server.Hostname }}
server {
server_name {{ buildServerName $server.Hostname }} {{range $server.Aliases }}{{ . }} {{ end }};
error_page 403 /403.html;
{{ if gt (len $cfg.BlockUserAgents) 0 }}
if ($block_ua) {
return 403;
}
{{ end }}
{{ if gt (len $cfg.BlockReferers) 0 }}
if ($block_ref) {
return 403;
}
{{ end }}
location = /403.html {
root /usr/local/nginx/html/;
internal;
}
{{ template "SERVER" serverConfig $all $server }}
{{ if not (empty $cfg.ServerSnippet) }}
# Custom code snippet configured in the configuration configmap
{{ $cfg.ServerSnippet }}
{{ end }}
{{ template "CUSTOM_ERRORS" (buildCustomErrorDeps "upstream-default-backend" $cfg.CustomHTTPErrors $all.EnableMetrics) }}
}
## end server {{ $server.Hostname }}
{{ end }}
In the above snippet, error_page 403 /403.html; is declared before we return 403. Then the location for /403.html is defined. Its root path is the same place where you should mount the 403.html page; in this case that is /usr/local/nginx/html/.
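The edited template also has to reach the controller somehow; one common approach (a sketch, with custom-nginx-template as a placeholder ConfigMap name) is to ship it in a ConfigMap and mount it over /etc/nginx/template in the controller Deployment:
volumeMounts:
- name: nginx-template
  mountPath: /etc/nginx/template
volumes:
- name: nginx-template
  configMap:
    name: custom-nginx-template
    items:
    - key: nginx.tmpl
      path: nginx.tmpl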
The snippet below will help you mount the volume with the custom pages.
volumes:
- name: custom-errors
configMap:
# Provide the name of the ConfigMap you want to mount.
name: custom-ingress-pages
items:
- key: "404.html"
path: "404.html"
- key: "403.html"
path: "403.html"
- key: "50x.html"
path: "50x.html"
- key: "index.html"
path: "index.html"
This solution doesn't require you to spawn any extra service or pod for it to work.
For more info: https://engineering.zenduty.com/blog/2022/03/02/customizing-error-pages
You need to create and deploy a custom default backend that returns a custom error page. Follow the doc to deploy a custom default backend, and configure the nginx ingress controller to use it by modifying the controller's deployment YAML.
The deployment yaml for the custom default backend is here and the source code is here.
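As a rough sketch of what that controller change looks like (custom-default-backend and <namespace> are placeholders; check your controller Deployment for the full args list), the controller is pointed at the custom backend with the --default-backend-service flag:
containers:
- name: controller
  args:
  - /nginx-ingress-controller
  - --default-backend-service=<namespace>/custom-default-backend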
I was trying to use the io.fabric8 API to create a few resources in Kubernetes from a pod-spec.yaml.
Config config = new ConfigBuilder()
.withNamespace("ag")
.withMasterUrl(K8_URL)
.build();
try (final KubernetesClient client = new DefaultKubernetesClient(config)) {
LOGGER.info("Master: " + client.getMasterUrl());
LOGGER.info("Loading File : " + args[0]);
Pod pod = client.pods().load(new FileInputStream(args[0])).get();
LOGGER.info("Pod created with name : " + pod.toString());
} catch (Exception e) {
LOGGER.error(e.getMessage(), e);
}
The above code works if the resource type is Pod, and it works fine for other single resource types as well.
But if the YAML contains multiple resource types, for example a Pod and a Service in the same file, how do I use the fabric8 API?
I tried client.load(new FileInputStream(args[0])).createOrReplace(); but it crashes with the exception below:
java.lang.NullPointerException
at java.net.URI$Parser.parse(URI.java:3042)
at java.net.URI.<init>(URI.java:588)
at io.fabric8.kubernetes.client.utils.URLUtils.join(URLUtils.java:48)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:208)
at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:177)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:53)
at io.fabric8.kubernetes.client.handlers.PodHandler.reload(PodHandler.java:32)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:202)
at io.fabric8.kubernetes.client.dsl.internal.NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.createOrReplace(NamespaceVisitFromServerGetWatchDeleteRecreateWaitApplicableListImpl.java:62)
at com.nokia.k8s.InterpreterLanuch.main(InterpreterLanuch.java:66)
The YAML file used:
apiVersion: v1
kind: Pod
metadata:
generateName: zep-ag-pod
annotations:
kubernetes.io/psp: restricted
spark-app-name: Zeppelin-spark-shared-process
namespace: ag
labels:
app: zeppelin
int-app-selector: shell-123
spec:
containers:
- name: ag-csf-zep
image: bcmt-registry:5000/zep-spark2.2:9
imagePullPolicy: IfNotPresent
command: ["/bin/bash"]
args: ["-c","echo Hi && sleep 60 && echo Done"]
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
securityContext:
fsGroup: 2000
runAsUser: 1510
serviceAccount: csfzeppelin
serviceAccountName: csfzeppelin
---
apiVersion: v1
kind: Service
metadata:
name: zeppelin-service
namespace: ag
labels:
app: zeppelin
spec:
type: NodePort
ports:
- name: zeppelin-service
port: 30099
protocol: TCP
targetPort: 8080
selector:
app: zeppelin
You don't need to specify the resource type when loading a file with multiple documents. You simply need to do:
// Load the YAML into a list of Kubernetes resources (one entry per document)
List<HasMetadata> result = client.load(new FileInputStream(args[0])).get();
// Apply the Kubernetes resources
client.resourceList(result).inNamespace(namespace).createOrReplace();