airflow.exceptions.AirflowException: Dag could not be found; either it does not exist or it failed to parse - airflow-2.x

I recently upgraded Airflow from 1.10.11 to 2.2.3, following the steps given in https://airflow.apache.org/docs/apache-airflow/stable/upgrading-from-1-10/index.html. I first upgraded to 1.10.15 as suggested, which worked fine. But after upgrading to 2.2.3, I'm unable to execute DAGs from the UI: the tasks stay in the queued state. When I check the task pod logs, I see this error:
[2022-02-22 06:46:23,886] {cli_action_loggers.py:105} WARNING - Failed to log action with (sqlite3.OperationalError) no such table: log
[SQL: INSERT INTO log (dttm, dag_id, task_id, event, execution_date, owner, extra) VALUES (?, ?, ?, ?, ?, ?, ?)]
[parameters: ('2022-02-22 06:46:23.880923', 'dag id', 'task id', 'cli_task_run', None, 'airflow', '{"host_name": "pod name", "full_command": "[\'/home/airflow/.local/bin/airflow\', \'tasks\', \'task id\', \'manual__2022-02-22T06:45:47.840912+00:00\', \'--local\', \'--subdir\', \'DAGS_FOLDER/dag_file.py\']"}')]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
[2022-02-22 06:46:23,888] {dagbag.py:500} INFO - Filling up the DagBag from /opt/airflow/dags/repo/xxxxx.py
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 282, in task_run
dag = get_dag(args.subdir, args.dag_id)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 193, in get_dag
f"Dag {dag_id!r} could not be found; either it does not exist or it failed to parse."
airflow.exceptions.AirflowException: Dag 'xxxxx' could not be found; either it does not exist or it failed to parse
I did try exec'ing into the webserver and scheduler using "kubectl exec -it airflow-dev-webserver-6c5755d5dd-262wd -n dev --container webserver -- /bin/sh". I could see all the DAGs under /opt/airflow/dags/repo/. Even the error says it is filling up the DagBag from /opt/airflow/dags/repo/, but I couldn't understand what was making the task execution go into the queued state.

I figured out the issue using the steps below:
I triggered a DAG, after which I could see a task pod going into the error state. So I ran "kubectl logs {pod_name} git-sync" to check whether the DAGs were being copied in the first place. The git-sync logs showed a permission error: git-sync could not write the DAGs into the DAGs folder.
Then I realized it was a permissions problem when writing the DAGs to the DAGs folder. To fix it, I set "readOnly: false" on the dags volume under the "volumeMounts" section.
That's it! It worked. The pod template below is what finally worked:
Pod Template File:
apiVersion: v1
kind: Pod
metadata:
labels:
component: worker
release: airflow-dev
tier: airflow
spec:
containers:
- args: []
command: []
env:
- name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY
value: ECR repo link
- name: AIRFLOW__SMTP__SMTP_PORT
value: '587'
- name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG
value: docker image tag
- name: AIRFLOW__KUBERNETES__GIT_SYNC_RUN_AS_USER
value: '65533'
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: 'True'
- name: AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM
value: dw-airflow-dev-logs
- name: AIRFLOW__KUBERNETES__RUN_AS_USER
value: '50000'
- name: AIRFLOW__KUBERNETES__DAGS_IN_IMAGE
value: 'False'
- name: AIRFLOW__SCHEDULER__SCHEDULE_AFTER_TASK_EXECUTION
value: 'False'
- name: AIRFLOW__SMTP__SMTP_MAIL_FROM
value: email id
- name: AIRFLOW__CORE__LOAD_EXAMPLES
value: 'False'
- name: AIRFLOW__SMTP__SMTP_PASSWORD
value: xxxxxxxxx
- name: AIRFLOW__SMTP__SMTP_HOST
value: smtp-relay.gmail.com
- name: AIRFLOW__KUBERNETES__NAMESPACE
value: dev
- name: AIRFLOW__SMTP__SMTP_USER
value: xxxxxxxxxx
- name: AIRFLOW__CORE__EXECUTOR
value: LocalExecutor
- name: AIRFLOW_HOME
value: /opt/airflow
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/airflow/dags
- name: AIRFLOW__KUBERNETES__GIT_DAGS_FOLDER_MOUNT_POINT
value: /opt/airflow/dags
- name: AIRFLOW__KUBERNETES__FS_GROUP
value: "50000"
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
key: connection
name: airflow-dev-airflow-metadata
- name: AIRFLOW_CONN_AIRFLOW_DB
valueFrom:
secretKeyRef:
key: connection
name: airflow-dev-airflow-metadata
- name: AIRFLOW__CORE__FERNET_KEY
valueFrom:
secretKeyRef:
key: fernet-key
name: airflow-dev-fernet-key
envFrom: []
image: docker image
imagePullPolicy: IfNotPresent
name: base
ports: []
volumeMounts:
- mountPath: /opt/airflow/dags
name: airflow-dags
readOnly: false
subPath: /repo
- mountPath: /opt/airflow/logs
name: airflow-logs
- mountPath: /etc/git-secret/ssh
name: git-sync-ssh-key
subPath: ssh
- mountPath: /opt/airflow/airflow.cfg
name: airflow-config
readOnly: true
subPath: airflow.cfg
- mountPath: /opt/airflow/config/airflow_local_settings.py
name: airflow-config
readOnly: true
subPath: airflow_local_settings.py
hostNetwork: false
imagePullSecrets:
- name: airflow-dev-registry
initContainers:
- env:
- name: GIT_SYNC_REPO
value: xxxxxxxxxxxxx
- name: GIT_SYNC_BRANCH
value: master
- name: GIT_SYNC_ROOT
value: /git
- name: GIT_SYNC_DEST
value: repo
- name: GIT_SYNC_DEPTH
value: '1'
- name: GIT_SYNC_ONE_TIME
value: 'true'
- name: GIT_SYNC_REV
value: HEAD
- name: GIT_SSH_KEY_FILE
value: /etc/git-secret/ssh
- name: GIT_SYNC_ADD_USER
value: 'true'
- name: GIT_SYNC_SSH
value: 'true'
- name: GIT_KNOWN_HOSTS
value: 'false'
image: k8s.gcr.io/git-sync:v3.1.6
name: git-sync
securityContext:
runAsUser: 65533
volumeMounts:
- mountPath: /git
name: airflow-dags
readOnly: false
- mountPath: /etc/git-secret/ssh
name: git-sync-ssh-key
subPath: ssh
nodeSelector: {}
restartPolicy: Never
securityContext:
fsGroup: 50000
runAsUser: 50000
serviceAccountName: airflow-dev-worker-serviceaccount
volumes:
- emptyDir: {}
name: airflow-dags
- name: airflow-logs
persistentVolumeClaim:
claimName: dw-airflow-dev-logs
- name: git-sync-ssh-key
secret:
items:
- key: gitSshKey
mode: 444
path: ssh
secretName: airflow-private-dags-dev
- configMap:
name: airflow-dev-airflow-config
name: airflow-config
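For anyone skimming the template above, the part tied to the fix described earlier is that the shared airflow-dags volume is not mounted read-only, either on the worker container's dags mount or where git-sync writes into /git. The relevant excerpt:
# worker (main) container
volumeMounts:
  - mountPath: /opt/airflow/dags
    name: airflow-dags
    readOnly: false   # the permission fix described above
    subPath: /repo
# git-sync init container
volumeMounts:
  - mountPath: /git
    name: airflow-dags
    readOnly: false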

Related

Kubernetes: multiple deployments using one code base but different configuration (environment variables)

I have a project where we consume data from Kafka and publish to Mongo. In fact, the code base does only one task at a time; it may be a Mongo-to-Kafka migration, a Kafka-to-Mongo migration, or something else.
We have to consume from different Kafka topics and publish to different Mongo collections. These are parallel streams of work.
The current design is one codebase that can consume from any topic and publish to any Mongo collection, configurable using environment variables. So we created one Kubernetes Pod with multiple containers inside it; each container has different environment variables.
My questions:
Is it wise to use multiple containers in one pod? They are easy to distinguish, but since they are tightly coupled, I'm guessing there's a high chance of failure, and it's not really a proper microservice design.
Should I create a separate deployment for each of these pipelines? That would be very difficult to maintain, as each will have different deployment configs.
Is there any better way to address this ?
Sample of step 1:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations: {}
name: test-raw-mongodb-sink-apps
namespace: test-apps
spec:
selector:
matchLabels:
app: test-raw-mongodb-sink-apps
template:
metadata:
labels:
app: test-raw-mongodb-sink-apps
spec:
containers:
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-alchemy
- name: INPUT_TOPIC
value: test.raw.ptv.alchemy
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8081"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/dpl/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-alchemy
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-bloomberg
- name: INPUT_TOPIC
value: test.raw.pretrade.bloomberg
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8082"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-bloomberg
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-calypso
- name: INPUT_TOPIC
value: test.raw.ptv.calypso
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8083"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-calypso
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-dtres
- name: INPUT_TOPIC
value: test.raw.ptv.dtres
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8084"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-dtres
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-feds
- name: INPUT_TOPIC
value: test.raw.ptv.feds
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8085"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-feds
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-hoops
- name: INPUT_TOPIC
value: test.raw.ptv.hoops
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8086"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-hoops
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxcore
- name: INPUT_TOPIC
value: test.raw.ptv.murex_core
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8087"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxcore
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxeqd
- name: INPUT_TOPIC
value: test.raw.ptv.murex_eqd_sa
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8088"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxeqd
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxgts
- name: INPUT_TOPIC
value: test.raw.ptv.murex_gts_sa
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8089"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxgts
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxmr
- name: INPUT_TOPIC
value: test.raw.ptv.murex_mr
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8090"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxmr
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxgtscf
- name: INPUT_TOPIC
value: test.raw.cashflow.murex_gts_sa
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8091"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxgtscf
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxcoll
- name: INPUT_TOPIC
value: test.raw.collateral.mxcoll
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8092"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxcoll
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-mxcoll-link
- name: INPUT_TOPIC
value: test.raw.collateral.mxcoll_link
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8093"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-mxcoll-link
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-ost
- name: INPUT_TOPIC
value: test.raw.ptv.ost
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8094"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-ost
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
- env:
- name: EVENTS_TOPIC
value: test.ops.proc-events
- name: GROUP_ID
value: test-mongodb-sink-posmon
- name: INPUT_TOPIC
value: test.raw.ptp.posmon
- name: MONGODB_AUTH_DB
value: admin
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: MONGODB_PASSWORD
value: test123
- name: MONGODB_PORT
value: "27017"
- name: MONGODB_USERNAME
value: root
- name: SERVER_PORT
value: "8095"
- name: KAFKA_BROKERS
value: kafka-cluster-kafka-bootstrap.kafka:9093
- name: TRUSTSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: ca.password
name: kafka-ca-cert
- name: KEYSTORE_PASSWORD
valueFrom:
secretKeyRef:
key: user.password
name: kafka
image: tools.testCompany.co.za:8093/local/tt--mongodb-map:0.0.7.0-SNAPSHOT
name: test-mongodb-sink-posmon
securityContext:
allowPrivilegeEscalation: true
privileged: true
volumeMounts:
- mountPath: /app/resources
name: properties
- mountPath: /stores
name: stores
readOnly: true
Thanks
A templating tool like Helm will let you fill in the environment-variable values from deploy-time settings. In Helm this would look like:
env:
- name: EVENTS_TOPIC
value: {{ .Values.eventsTopic }}
- name: GROUP_ID
value: {{ .Values.groupId }}
- name: INPUT_TOPIC
value: {{ .Values.inputTopic }}
You could then deploy this multiple times with different sets of topics:
helm install alchemy . \
--set eventsTopic=test.ops.proc-events \
--set groupId=test-mongodb-sink-alchemy \
--set inputTopic=test.raw.ptv.alchemy
helm install bloomberg . \
--set eventsTopic=test.ops.proc-events \
--set groupId=test-mongodb-sink-bloomberg \
--set inputTopic=test.raw.pretrade.bloomberg
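For those --set flags to work, the chart's values.yaml needs matching keys. A minimal sketch (the defaults here are just the alchemy values, and the key names are the ones assumed by the template above):
# values.yaml (defaults; overridden per release with --set)
eventsTopic: test.ops.proc-events
groupId: test-mongodb-sink-alchemy
inputTopic: test.raw.ptv.alchemy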
You could also write the Helm chart to take a list of topic sets and deploy the chart only once:
{{- $top := . -}}{{/* because "range" overwrites "." */}}
{{- range $topic := .Values.topics }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ $topic.name }}
spec:
...
env:
- name: EVENT_TOPIC
value: {{ $top.Values.eventTopic }}{{/* common to all deployments */}}
- name: GROUP_ID
value: test-mongodb-sink-{{ $topic.name }}
- name: INPUT_TOPIC
value: {{ $topic.inputTopic }}
{{- end }}
Write configuration like:
eventTopic: test.ops.proc-events
topics:
- name: alchemy
inputTopic: test.raw.ptv.alchemy
- name: bloomberg
inputTopic: test.raw.pretrade.bloomberg
And deploy like:
helm install connector . -f topic-listing.yaml
In any case, you will want only one container per pod. There are a couple of reasons for this. If the list of topics ever changes, separate Deployments let you create or delete pipelines without interfering with the other topics; if everything is in a single pod, you'd have to stop and restart everything together, and it can take Kafka a minute or two to figure out what happened. In a Kafka context, you can also run only as many consumers as a topic has partitions, but not really more; if you have a very busy topic you can easily raise that Deployment's replicas: to run multiple consumers against multiple partitions, but if everything is in one pod, your only choice is to scale everything together.
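As a concrete example of that last point: if the bloomberg topic had three partitions, you could scale just that consumer and leave the rest alone (a sketch; the Deployment name here assumes each pipeline's Deployment is named after it, as in the manifests above):
kubectl scale deployment test-mongodb-sink-bloomberg --replicas=3 -n test-apps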
Is it wise to use multiple containers in one pod? They are easy to distinguish, but since they are tightly coupled, I'm guessing there's a high chance of failure, and it's not really a proper microservice design.
You most likely want to deploy them as separate services, so that you can update or re-configure them independently of each other.
Should I create a separate deployment for each of these pipelines? That would be very difficult to maintain, as each will have different deployment configs.
Kustomize is a tool built into kubectl and is a good choice when you want to deploy the same manifest to multiple environments with different configurations. This solution requires no additional tooling other than kubectl.
Deploying to multiple environments with Kustomize
Directory structure:
base/
- deployment.yaml # fully deployable manifest - no templating
- kustomization.yaml # default values e.g. for dev environment
app1/
- kustomization.yaml # specific values for app1
app2/
- kustomization.yaml # specific values for app2
Example Deployment manifest with Kustomization
Here, the environment variables are loaded from a ConfigMap so that we can use configMapGenerator. This file is base/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb-sink
namespace: test-apps
spec:
template: # some fields, e.g. labels, are omitted in this example
spec:
containers:
- name: mongodb-sink
image: mongodb-map:0.0.7.0-SNAPSHOT
env:
- name: MONGODB_HOST0
value: test-mongodb-0.test-mongodb-headless.test-infra
- name: MONGODB_HOST1
value: test-mongodb-1.test-mongodb-headless.test-infra
- name: GROUP_ID
valueFrom:
configMapKeyRef:
name: my-values
key: GROUP_ID
- name: INPUT_TOPIC
valueFrom:
configMapKeyRef:
name: my-values
key: INPUT_TOPIC
...
Also add a base/kustomization.yaml file to describe the configMapGenerator and related files.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
configMapGenerator:
- name: my-values
literals:
- GROUP_ID=test-mongodb-sink-calypso
- INPUT_TOPIC=test.raw.ptv.calypso
... # also add your other values
Preview Manifests
kubectl kustomize base/
Apply Manifests
kubectl apply -k base/
Add config for app1 and app2
With app1 we now want to use the manifest we have in base/ and just overlay what is different for app1. This file is app1/kustomization.yaml; app2/kustomization.yaml is similar.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
- ../base
namePrefix: bloomberg-sink- # this gives your Deployment a prefixed name
configMapGenerator:
- name: my-values
behavior: replace
literals:
- GROUP_ID=test-mongodb-sink-bloomberg
- INPUT_TOPIC=test.raw.pretrade.bloomberg
... # also add your other values
Preview Manifests
kubectl kustomize app1/
Apply Manifests
kubectl apply -k app1/
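If you end up with one overlay directory per pipeline, rolling them all out is a short loop (a sketch assuming the directory layout shown above):
for app in app1 app2; do
  kubectl apply -k "$app/"
done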
Documentation
Kubernetes: Declarative Management of Kubernetes Objects Using Kustomize
SIG CLI: Kustomization file

Why are podman pods not reproducible using kubernetes yaml file?

I created a pod following a Red Hat blog post, then tried to recreate the pod using the YAML file generated by podman generate kube.
Post: https://www.redhat.com/sysadmin/compose-podman-pods
When creating the pod using the commands, the pod works fine (I can access localhost:8080).
When creating the pod using the YAML file, I get a 403 Forbidden error.
I have tried this on two different hosts (both creating the pod from scratch and using the YAML), deleting all images and the pod each time to make sure nothing was influencing the process.
I'm using podman 2.0.4 on Ubuntu 20.04
Commands:
podman pod create --name wptestpod -p 8080:80
podman run \
-d --restart=always --pod=wptestpod \
-e MYSQL_ROOT_PASSWORD="myrootpass" \
-e MYSQL_DATABASE="wp" \
-e MYSQL_USER="wordpress" \
-e MYSQL_PASSWORD="w0rdpr3ss" \
--name=wptest-db mariadb
podman run \
-d --restart=always --pod=wptestpod \
-e WORDPRESS_DB_NAME="wp" \
-e WORDPRESS_DB_USER="wordpress" \
-e WORDPRESS_DB_PASSWORD="w0rdpr3ss" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
--name wptest-web wordpress
Original YAML file from podman generate kube wptestpod > wptestpod.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.0.4
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: '2020-08-26T17:02:56Z'
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- command:
- apache2-foreground
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_USER
value: wordpress
- name: APACHE_CONFDIR
value: /etc/apache2
- name: PHP_LDFLAGS
value: -Wl,-O1 -pie
- name: PHP_VERSION
value: 7.4.9
- name: PHP_EXTRA_CONFIGURE_ARGS
value: --with-apxs2 --disable-cgi
- name: GPG_KEYS
value: 42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
- name: WORDPRESS_DB_PASSWORD
value: t3stp4ssw0rd
- name: APACHE_ENVVARS
value: /etc/apache2/envvars
- name: PHP_ASC_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz.asc
- name: PHP_SHA256
value: 23733f4a608ad1bebdcecf0138ebc5fd57cf20d6e0915f98a9444c3f747dc57b
- name: PHP_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: PHP_CPPFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: PHP_MD5
- name: PHP_EXTRA_BUILD_DEPS
value: apache2-dev
- name: PHP_CFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: WORDPRESS_SHA1
value: 03fe1a139b3cd987cc588ba95fab2460cba2a89e
- name: PHPIZE_DEPS
value: "autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c"
- name: WORDPRESS_VERSION
value: '5.5'
- name: PHP_INI_DIR
value: /usr/local/etc/php
- name: HOSTNAME
value: wptestpod
image: docker.io/library/wordpress:latest
name: wptest-web
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: MYSQL_PASSWORD
value: t3stp4ssw0rd
- name: GOSU_VERSION
value: '1.12'
- name: GPG_KEYS
value: 177F4010FE56CA3336300305F1656F24C74CD1D8
- name: MARIADB_MAJOR
value: '10.5'
- name: MYSQL_ROOT_PASSWORD
value: t3stp4ssw0rd
- name: MARIADB_VERSION
value: 1:10.5.5+maria~focal
- name: MYSQL_DATABASE
value: wp
- name: MYSQL_USER
value: wordpress
- name: HOSTNAME
value: wptestpod
image: docker.io/library/mariadb:latest
name: wptest-db
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
---
metadata:
creationTimestamp: null
spec: {}
status:
loadBalancer: {}
YAML file with certain envs removed (taken from blog post):
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.9.3
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-07-01T20:17:42Z"
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Can anyone see why this pod would not work when created using the YAML file, but works fine when created using the commands? It seems like a good workflow, but it's useless if the pods produced with the YAML are non-functional.
I found the same article and hit the same problem as you. None of the following tests worked for me:
Add and remove environment variables
Add and remove restartPolicy part
Play with the capabilities part
As soon as you put the command part back, everything fires up again.
Check it with the following wordpress.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
labels:
app: wordpress-pod
name: wordpress-pod
spec:
containers:
- command:
- apache2-foreground
name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Play & checks:
# Create containers, pod and run everything
$ podman play kube wordpress.yaml
# Output
Pod:
5a211c35419b4fcf0deda718e47eec2dd10653a5c5bacc275c312ae75326e746
Containers:
bfd087b5649f8d1b3c62ef86f28f4bcce880653881bcda21823c09e0cca1c85b
5aceb11500db0a91b4db2cc4145879764e16ed0e8f95a2f85d9a55672f65c34b
# Check running state
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aceb11500db docker.io/library/mariadb:latest mysqld 13 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-db
bfd087b5649f docker.io/library/wordpress:latest apache2-foregroun... 16 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-web
d8bf33eede43 k8s.gcr.io/pause:3.2 19 seconds ago Up 11 seconds ago 0.0.0.0:8080->80/tcp 5a211c35419b-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
5a211c35419b wordpress-pod Running 20 seconds ago d8bf33eede43 3
A bit more explanation about the bug:
The problem is that entrypoint and cmd are not parsed correctly from the images, as they should be and as you would expect. It was working in previous versions, and it has already been identified and fixed for future ones.
For complete reference:
Comment found at podman#8710-comment.748672710 breaks this problem into two pieces:
"make podman play use ENVs from image" (podman#8654 already fixed in mainstream)
"podman play should honour both ENTRYPOINT and CMD from image" (podman#8666)
This one was superseded by "play kube: fix args/command handling" (podman#8807, the one already merged to mainstream)
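If you hit the same issue with a different image and need to know which command to put back, you can read the image's Entrypoint and Cmd from its metadata (a sketch using podman image inspect with a Go template; the field names follow the Config section of the inspect output):
podman image inspect docker.io/library/wordpress:latest --format 'Entrypoint={{.Config.Entrypoint}} Cmd={{.Config.Cmd}}'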

K8s: Error in applying yaml file after adding env values

The following YAML file works fine:
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
But when I add the following two lines to the env section, I get an error:
apiVersion: apps/v1
kind: Deployment
metadata:
name: something
spec:
replicas: 2
selector:
matchLabels:
app: something
template:
metadata:
labels:
app: something
spec:
volumes:
- name: shared-logs
emptyDir: {}
containers:
- name: something
image: docker.io/manuchadha25/something
volumeMounts:
- name: shared-logs
mountPath: /deploy/codingjediweb-1.0/logs/
env:
- name: DB_CASSANDRA_URI
value: cassandra://34.91.5.44
- name: DB_CASSANDRA_PORT <--- NEW LINE
value: 9042 <--- NEW LINE
- name: DB_PASSWORD
value: something
- name: DB_KEYSPACE_NAME
value: something
- name: DB_USERNAME
value: something
- name: EMAIL_SERVER
value: something
- name: EMAIL_USER
value: something
- name: EMAIL_PASSWORD
value: something
- name: ALLOWED_NODES
value: 34.105.134.5
ports:
- containerPort: 9000
#- name: logging
# image: busybox
#volumeMounts:
# - name: shared-logs
# mountPath: /deploy/codingjediweb-1.0/logs/
#command: ['sh', '-c', "while true; do sleep 86400; done"]
$ kubectl apply -f codingjediweb-nodes.yaml
Error from server (BadRequest): error when creating "codingjediweb-nodes.yaml": Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.Env: []v1.EnvVar: v1.EnvVar.Value: ReadString: expects " or n, but found 9, error found in #10 byte of ...|,"value":9042},{"nam|..., bigger context ...|.1.85.10"},{"name":"DB_CASSANDRA_PORT","value":9042},{"name":"DB_PASSWORD","value":"1GFGc1Q|...
An online YAML validator confirms that the YAML is syntactically correct.
What am I doing wrong?
Could you please put 9042 in double quotes ("9042") and try again? The field expects a string but is getting a number, so the value needs to be quoted.
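In other words, only the new value needs to change:
- name: DB_CASSANDRA_PORT
  value: "9042"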

Getting "cannot init crypto" while deploying hyperledger fabric peer to Kubernetes

I am trying to deploy a one-peer Hyperledger Fabric network to Kubernetes on GCP, and while deploying the peer I am getting this error:
"Cannot run peer because cannot init crypto, missing /var/msp folder"
I tried mounting the MSP material, but it is not working.
This is the peer config:
apiVersion: apps/v1
kind: Deployment
metadata:
name: peer0
spec:
replicas: 1
selector:
matchLabels:
app: peer0
template:
metadata:
labels:
app: peer0
tier: backend
track: stable
spec:
hostAliases:
- ip: "10.128.0.3"
hostnames:
- "peer0.example.com"
- ip: "10.128.0.3"
hostnames:
- "couchdb0"
- ip: "10.128.0.4"
hostnames:
- "orderer0.orderer.com"
nodeSelector:
id: peer
containers:
- name: peer0
image: "hyperledger/fabric-peer:1.2.0"
ports:
- name: peer0-port
containerPort: 30002
- name: peer0-chaincode
containerPort: 30003
- name: peer0-event
containerPort: 30004
workingDir: /opt/gopath/src/github.com/hyperledger/fabric/peer
command: ["peer"]
args: ["node","start"]
env:
- name: CORE_VM_ENDPOINT
value: "unix:///var/run/docker.sock"
- name: CORE_PEER_ADDRESSAUTODETECT
value: "true"
- name: CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE
value: "bridge"
- name: CORE_PEER_ID
value: "peer0.example.com"
- name: CORE_PEER_ADDRESS
value: "peer0.example.com:30002"
- name: CORE_PEER_GOSSIP_EXTERNALENDPOINT
value: "peer0.example.com:30002"
- name: CORE_PEER_CHAINCODELISTENADDRESS
value: "0.0.0.0:30003"
- name: CORE_PEER_GOSSIP_BOOTSTRAP
value: "0.0.0.0:30002"
- name: CORE_PEER_LISTENADDRESS
value: "0.0.0.0:30002"
- name: CORE_PEER_EVENTS_ADDRESS
value: "0.0.0.0:30004"
- name: CORE_PEER_LOCALMSPID
value: "exampleMSP"
- name: CORE_LOGGING_GOSSIP
value: "INFO"
- name: CORE_LOGGING_PEER_GOSSIP
value: "INFO"
- name: CORE_LOGGING_MSP
value: "INFO"
- name: CORE_LOGGING_POLICIES
value: "DEBUG"
- name: CORE_LOGGING_CAUTHDSL
value: "DEBUG"
- name: CORE_PEER_TLS_ENABLED
value: "false"
- name: CORE_PEER_GOSSIP_USELEADERELECTION
value: "true"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_PEER_PROFILE_ENABLED
value: "true"
- name: CORE_LEDGER_STATE_STATEDATABASE
value: "CouchDB"
- name: CORE_PEER_GOSSIP_ORGLEADER
value: "false"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS
value: "couchdb0:30005"
- name: ORDERER_URL
value: "orderer0.orderer.com:30001"
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME
value: ""
- name: CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
value: ""
- name: CORE_VM_DOCKER_ATTACHSTDOUT
value: "true"
- name: CORE_PEER_FILESYSTEMPATH
value: "/var/production"
- name: CORE_PEER_MSPCONFIGPATH
#value: "/var/msp"
value: "/var/msp"
volumeMounts:
- name: peer0-volume
mountPath: /var
- name: host
mountPath: /var/run
volumes:
- name: peer0-volume
#persistentVolumeClaim:
# claimName: peer0-pvc
- name: host
hostPath:
path: /var/run
Referencing James' comment:
"I resolved it , it was happening due to files not getting mount inside the container , I have added separate mount points for that and it worked fine."
It might be helpful to try kubechain from npm.

Kubernetes Helm Chart - Debugging

I'm unable to find good information describing these errors:
[sarah#localhost helm] helm install statefulset --name statefulset --debug
[debug] Created tunnel using local port: '33172'
[debug] SERVER: "localhost:33172"
[debug] Original chart version: ""
[debug] CHART PATH: /home/helm/statefulset/
Error: error validating "": error validating data: [field spec.template for v1beta1.StatefulSetSpec is required, field spec.serviceName for v1beta1.StatefulSetSpec is required, found invalid field containers for v1beta1.StatefulSetSpec]
I'm still new to Helm; I've built two working charts that were similar to this template and didn't have these errors, even though the code isn't much different. I'm thinking there might be some kind of formatting error that I'm not noticing. Either that, or it's due to the different type (the others were Pods, this is StatefulSet).
The YAML file it's referencing is here:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
#serviceAccount: "{{.Values.PrimaryName}}-sa"
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}"
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
Would someone be able to a) point me in the right direction on how to implement the required spec.template and spec.serviceName fields, b) help me understand why the 'containers' field is invalid, and/or c) mention any tool that can help debug Helm charts? I've attempted 'helm lint' and the '--debug' flag, but 'helm lint' shows no errors and the flag output is what's shown with the errors above.
Is it possible the errors are coming from a different file?
StatefulSet objects have a different structure than Pods do. You need to modify your YAML file a little:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: "{{.Values.PrimaryName}}"
labels:
name: "{{.Values.PrimaryName}}"
app: "{{.Values.PrimaryName}}"
chart: "{{.Chart.Name}}-{{.Chart.Version}}"
annotations:
"helm.sh/created": {{.Release.Time.Seconds | quote }}
spec:
selector:
matchLabels:
app: "" # has to match .spec.template.metadata.labels
serviceName: "" # put your serviceName here
replicas: 1 # by default is 1
template:
metadata:
labels:
app: "" # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: {{.Values.ContainerName}}
image: "{{.Values.PostgresImage}}"
ports:
- containerPort: 5432
protocol: TCP
name: postgres
resources:
requests:
cpu: {{default "100m" .Values.Cpu}}
memory: {{default "100M" .Values.Memory}}
env:
- name: PGHOST
value: /tmp
- name: PG_PRIMARY_USER
value: primaryuser
- name: PG_MODE
value: set
- name: PG_PRIMARY_PORT
value: "5432"
- name: PG_PRIMARY_PASSWORD
value: "{{.Values.PrimaryPassword}}"
- name: PG_USER
value: testuser
- name: PG_PASSWORD
value: "{{.Values.UserPassword}}
- name: PG_DATABASE
value: userdb
- name: PG_ROOT_PASSWORD
value: "{{.Values.RootPassword}}"
volumeMounts:
- name: pgdata
mountPath: "/pgdata"
readOnly: false
volumes:
- name: pgdata
persistentVolumeClaim:
claimName: {{.Values.PVCName}}
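On question (c), two things help beyond helm lint: kubectl explain shows the required fields for a given kind, and rendering the chart locally lets the API server validate it before a real install (a sketch; the exact dry-run flags differ a little between kubectl versions):
kubectl explain statefulset.spec
helm template . | kubectl apply --dry-run=server -f -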