I am using Argo and have a question about the workflow of workflows example. (https://github.com/argoproj/argo-workflows/blob/master/examples/workflow-of-workflows.yaml)
UPDATED YET AGAIN
As pointed out below, it is a task that I need to view. So my question is now - How do I view the logs from a task?
My workflow completes without error, but does not produce the expected output. I would like to look at the logs of one of the containers within one of the workflows inside the overall workflow, but I cannot get the syntax right. I am using the following convention to get the logs from the relevant pod:
argo logs -n argo wf-name pod-name
and getting:
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.360917900Z time="2021-04-05T17:55:43.360Z" level=info msg="Starting Workflow Executor" executorType= version="{untagged 2021-04-05T17:09:35Z 79eb50b42e948466f82865b8a79756b57f9b66d9 untagged clean go1.15.7 gc linux/amd64}"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362737800Z time="2021-04-05T17:55:43.362Z" level=info msg="Creating a docker executor"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362815200Z time="2021-04-05T17:55:43.362Z" level=info msg="Executor (version: untagged, build_date: 2021-04-05T17:09:35Z) initialized (pod: argo/workflow-of-workflows-k8fm5-3824346685) with template:\n{\"name\":\"run\",\"inputs\":{\"parameters\":[{\"name\":\"runTemplate\",\"value\":\"demo1run.yaml\"}]},\"outputs\":{},\"metadata\":{},\"resource\":{\"action\":\"create\",\"manifest\":\"# Example of using a hard-wired artifact location from a HTTP URL.\\napiVersion: argoproj.io/v1alpha1\\nkind: Workflow\\nmetadata:\\n generateName: message-passing-1-\\n namespace: argo\\nspec:\\n serviceAccountName: argo\\n entrypoint: entrypoint\\n\\n templates:\\n\\n - name: echo\\n container:\\n image: weilidma/curl:0.4\\n command:\\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args:\\n - \\\"cat /mnt/raw/raw1.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: raw-p1-vol\\n mountPath: /mnt/raw\\n - name: log-p1-vol\\n mountPath: /mnt/logs/\\n\\n - name: process1\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"jq \\\\u0027[.data[].Platform |= test(\\\\u0022Healy\\\\u0022) | .[][] | select(.Platform == true) | {survey: .Survey, url: .\\\\u0022Data Access\\\\u0022}]\\\\u0027 /mnt/raw/raw1.json \\u003e /mnt/processed/filtered1.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: raw-p1-vol\\n mountPath: /mnt/raw/\\n - name: processed-p1-vol\\n mountPath: /mnt/processed/\\n - name: log-p1-vol\\n mountPath: /mnt/logs/\\n\\n - name: process2\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"jq \\\\u0027[.data[].Platform |= test(\\\\u0022Healy\\\\u0022) | .[][] | select(.Platform == true) | {survey: .Survey, url: .\\\\u0022Data Access\\\\u0022}]\\\\u0027 /mnt/raw/raw2.json \\u003e /mnt/processed/filtered2.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: raw-p2-vol\\n mountPath: /mnt/raw/\\n - name: processed-p2-vol\\n mountPath: /mnt/processed/\\n - name: log-p2-vol\\n mountPath: /mnt/logs/\\n\\n - name: join\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"jq -n --slurpfile f1 /mnt/processed1/filtered1.json --slurpfile f2 /mnt/processed2/filtered2.json -f .jq/join.jq --arg field \\\\u0022survey\\\\u0022 \\u003e /mnt/processed1/output.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: processed-p1-vol\\n mountPath: /mnt/processed1/\\n - name: processed-p2-vol\\n mountPath: /mnt/processed2/\\n - name: log-p1-vol\\n mountPath: /mnt/logs1/\\n - name: log-p2-vol\\n mountPath: /mnt/logs2/\\n\\n - name: egress\\n inputs:\\n parameters:\\n - name: ipaddr\\n container:\\n image: weilidma/curl:0.4\\n command: \\n - \\\"/bin/bash\\\"\\n - \\\"-c\\\"\\n args: \\n - \\\"cat /mnt/processed/output.json \\u0026\\u0026 exit\\\"\\n volumeMounts:\\n - name: processed-p1-vol\\n mountPath: /mnt/processed/\\n - name: log-p1-vol\\n mountPath: /mnt/logs/\\n\\n - dag:\\n tasks:\\n - name: echo\\n template: echo\\n dependencies:\\n - name: p1\\n template: process1\\n dependencies:\\n\\n - name: p2\\n template: process2\\n dependencies:\\n\\n - name: j\\n template: join\\n dependencies:\\n - p1\\n - p2\\n\\n - name: e\\n template: egress\\n arguments:\\n parameters: \\n - name: ipaddr \\n value: 'https://192.241.129.100'\\n dependencies:\\n - j\\n\\n name: entrypoint\\n\"}}"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362847900Z time="2021-04-05T17:55:43.362Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.362942100Z time="2021-04-05T17:55:43.362Z" level=info msg="kubectl create -f /tmp/manifest.yaml -o json"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.837625500Z time="2021-04-05T17:55:43.837Z" level=info msg="Resource: argo/Workflow.argoproj.io/message-passing-1-t8749. SelfLink: /apis/argoproj.io/v1alpha1/namespaces/argo/workflows/message-passing-1-t8749"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.837636900Z time="2021-04-05T17:55:43.837Z" level=info msg="Starting SIGUSR2 signal monitor"
workflow-of-workflows-k8fm5-3824346685: 2021-04-05T17:55:43.837696900Z time="2021-04-05T17:55:43.837Z" level=info msg="No output parameters"
Based on this output, the container name seems to be argo/Workflow.argoproj.io/message-passing-1-t8749, but when I add that to the end I get an error. Here are the commands I have tried:
argo logs -n argo workflow-of-workflows-k8fm5 workflow-of-workflows-k8fm5-3824346685 -c argo/Workflow.argoproj.io/message-passing-1-t8749
or
argo logs -n argo workflow-of-workflows-k8fm5 workflow-of-workflows-k8fm5-3824346685 -c message-passing-1-t8749
Thanks to Alex of ArgoProj!
Here is a command I did not know:
kubectl get workflow
will list (surprise) workflows! From there, I could see the individual workflows embedded into the larger workflow.
The default container names on an Argo Workflows pod are init, main, and wait.
As for message-passing-1-t8749: judging from the log line "Resource: argo/Workflow.argoproj.io/message-passing-1-t8749", that is the name of the child Workflow created by the resource template, not a container name.
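Putting that together, a minimal sketch of the commands (the child workflow name message-passing-1-t8749 is taken from the log output above):

kubectl get workflow -n argo                                  # list the child workflows created by the parent
argo logs -n argo message-passing-1-t8749                     # logs from all pods of the child workflow
argo logs -n argo message-passing-1-t8749 <pod-name> -c main  # logs of the main container in one pod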
Here's the Jenkinsfile I'm spinning up:
pipeline {
agent {
kubernetes {
yaml '''
apiVersion: v1
kind: Pod
metadata:
name: kaniko
namespace: jenkins
spec:
containers:
- name: kaniko
image: gcr.io/kaniko-project/executor:v1.8.1-debug
imagePullPolicy: IfNotPresent
command:
- /busybox/cat
tty: true
volumeMounts:
- name: jenkins-docker-cfg
mountPath: /kaniko/.docker
- name: image-cache
mountPath: /cache
imagePullSecrets:
- name: regcred
volumes:
- name: image-cache
persistentVolumeClaim:
claimName: kaniko-cache-pvc
- name: jenkins-docker-cfg
projected:
sources:
- secret:
name: regcred
items:
- key: .dockerconfigjson
path: config.json
'''
}
}
stages {
stage('Build & Cache Image'){
steps{
container(name: 'kaniko', shell: '/busybox/sh') {
withEnv(['PATH+EXTRA=/busybox']) {
sh '''#!/busybox/sh -xe
/kaniko/executor \
--cache \
--cache-dir=/cache \
--dockerfile Dockerfile \
--context `pwd`/Dockerfile \
--insecure \
--skip-tls-verify \
--destination testrepo/kaniko-test:0.0.1'''
}
}
}
}
}
}
The problem is that the executor doesn't dump the cache anywhere I can find. If I rerun the pod and stage, the executor logs say there's no cache. I want to retain the cache using a PVC, as you can see. Any thoughts? Am I missing something?
Thanks in advance.
You should use a separate kaniko-warmer pod, which will pre-download the specific images into the cache:
- name: kaniko-warmer
image: gcr.io/kaniko-project/warmer:latest
args: ["--cache-dir=/cache",
"--image=nginx:1.17.1-alpine",
"--image=node:17"]
volumeMounts:
- name: kaniko-cache
mountPath: /cache
volumes:
- name: kaniko-cache
hostPath:
path: /opt/volumes/database/qazexam-front-cache
type: DirectoryOrCreate
Then the kaniko-cache volume can be mounted into the kaniko executor container.
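For reference, a minimal sketch of the executor container reusing the same kaniko-cache volume (the image tag and destination are taken from the question; the context path is an assumption):

- name: kaniko
  image: gcr.io/kaniko-project/executor:v1.8.1-debug
  args:
    - "--cache=true"
    - "--cache-dir=/cache"
    - "--dockerfile=Dockerfile"
    - "--context=dir:///workspace"          # assumed build context
    - "--destination=testrepo/kaniko-test:0.0.1"
  volumeMounts:
    - name: kaniko-cache
      mountPath: /cache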
I recently upgraded Airflow from 1.10.11 to 2.2.3, following the steps given in https://airflow.apache.org/docs/apache-airflow/stable/upgrading-from-1-10/index.html. I first upgraded to 1.10.15 as suggested, which worked fine. But after upgrading to 2.2.3, I'm unable to execute the DAGs from the UI, as the task goes into the queued state. When I check the task pod logs, I see this error:
[2022-02-22 06:46:23,886] {cli_action_loggers.py:105} WARNING - Failed to log action with (sqlite3.OperationalError) no such table: log
[SQL: INSERT INTO log (dttm, dag_id, task_id, event, execution_date, owner, extra) VALUES (?, ?, ?, ?, ?, ?, ?)]
[parameters: ('2022-02-22 06:46:23.880923', 'dag id', 'task id', 'cli_task_run', None, 'airflow', '{"host_name": "pod name", "full_command": "[\'/home/airflow/.local/bin/airflow\', \'tasks\', \ task id\', \'manual__2022-02-22T06:45:47.840912+00:00\', \'--local\', \'--subdir\', \'DAGS_FOLDER/dag_file.py\']"}')]
(Background on this error at: http://sqlalche.me/e/13/e3q8)
[2022-02-22 06:46:23,888] {dagbag.py:500} INFO - Filling up the DagBag from /opt/airflow/dags/repo/xxxxx.py
Traceback (most recent call last):
File "/home/airflow/.local/bin/airflow", line 8, in <module>
sys.exit(main())
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 48, in main
args.func(args)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 48, in command
return func(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 92, in wrapper
return f(*args, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/task_command.py", line 282, in task_run
dag = get_dag(args.subdir, args.dag_id)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 193, in get_dag
f"Dag {dag_id!r} could not be found; either it does not exist or it failed to parse."
airflow.exceptions.AirflowException: Dag 'xxxxx' could not be found; either it does not exist or it failed to parse
I did try exec'ing into the webserver and scheduler using "kubectl exec -it airflow-dev-webserver-6c5755d5dd-262wd -n dev --container webserver -- /bin/sh". I could see all the DAGs under /opt/airflow/dags/repo/. Even the error says "Filling up the DagBag from /opt/airflow/dags/repo/", but I couldn't understand what was making the task execution go into the queued state.
I figured out the issue using the steps below:
I triggered a DAG, after which I could see a task pod going into the error state. So I ran "kubectl logs {pod_name} git-sync" to check whether the DAGs were being copied in the first place. Then I found the error below:
(screenshot of the git-sync log showing a permission error while writing the DAGs)
Then I realized it was a permissions problem with writing the DAGs to the DAGs folder. To fix it, I set "readOnly: false" under the "volumeMounts" section.
(screenshot of the updated volumeMounts section)
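In text form, the change is to make the DAGs volume mount writable; in the working template below, the git-sync init container mounts it like this:

volumeMounts:
  - mountPath: /git
    name: airflow-dags
    readOnly: false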
That's it! It worked. The configuration below is what finally worked:
Pod Template File:
apiVersion: v1
kind: Pod
metadata:
labels:
component: worker
release: airflow-dev
tier: airflow
spec:
containers:
- args: []
command: []
env:
- name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_REPOSITORY
value: ECR repo link
- name: AIRFLOW__SMTP__SMTP_PORT
value: '587'
- name: AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG
value: docker image tag
- name: AIRFLOW__KUBERNETES__GIT_SYNC_RUN_AS_USER
value: '65533'
- name: AIRFLOW__CORE__ENABLE_XCOM_PICKLING
value: 'True'
- name: AIRFLOW__KUBERNETES__LOGS_VOLUME_CLAIM
value: dw-airflow-dev-logs
- name: AIRFLOW__KUBERNETES__RUN_AS_USER
value: '50000'
- name: AIRFLOW__KUBERNETES__DAGS_IN_IMAGE
value: 'False'
- name: AIRFLOW__SCHEDULER__SCHEDULE_AFTER_TASK_EXECUTION
value: 'False'
- name: AIRFLOW__SMTP__SMTP_MAIL_FROM
value: email id
- name: AIRFLOW__CORE__LOAD_EXAMPLES
value: 'False'
- name: AIRFLOW__SMTP__SMTP_PASSWORD
value: xxxxxxxxx
- name: AIRFLOW__SMTP__SMTP_HOST
value: smtp-relay.gmail.com
- name: AIRFLOW__KUBERNETES__NAMESPACE
value: dev
- name: AIRFLOW__SMTP__SMTP_USER
value: xxxxxxxxxx
- name: AIRFLOW__CORE__EXECUTOR
value: LocalExecutor
- name: AIRFLOW_HOME
value: /opt/airflow
- name: AIRFLOW__CORE__DAGS_FOLDER
value: /opt/airflow/dags
- name: AIRFLOW__KUBERNETES__GIT_DAGS_FOLDER_MOUNT_POINT
value: /opt/airflow/dags
- name: AIRFLOW__KUBERNETES__FS_GROUP
value: "50000"
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
valueFrom:
secretKeyRef:
key: connection
name: airflow-dev-airflow-metadata
- name: AIRFLOW_CONN_AIRFLOW_DB
valueFrom:
secretKeyRef:
key: connection
name: airflow-dev-airflow-metadata
- name: AIRFLOW__CORE__FERNET_KEY
valueFrom:
secretKeyRef:
key: fernet-key
name: airflow-dev-fernet-key
envFrom: []
image: docker image
imagePullPolicy: IfNotPresent
name: base
ports: []
volumeMounts:
- mountPath: /opt/airflow/dags
name: airflow-dags
readOnly: false
subPath: /repo
- mountPath: /opt/airflow/logs
name: airflow-logs
- mountPath: /etc/git-secret/ssh
name: git-sync-ssh-key
subPath: ssh
- mountPath: /opt/airflow/airflow.cfg
name: airflow-config
readOnly: true
subPath: airflow.cfg
- mountPath: /opt/airflow/config/airflow_local_settings.py
name: airflow-config
readOnly: true
subPath: airflow_local_settings.py
hostNetwork: false
imagePullSecrets:
- name: airflow-dev-registry
initContainers:
- env:
- name: GIT_SYNC_REPO
value: xxxxxxxxxxxxx
- name: GIT_SYNC_BRANCH
value: master
- name: GIT_SYNC_ROOT
value: /git
- name: GIT_SYNC_DEST
value: repo
- name: GIT_SYNC_DEPTH
value: '1'
- name: GIT_SYNC_ONE_TIME
value: 'true'
- name: GIT_SYNC_REV
value: HEAD
- name: GIT_SSH_KEY_FILE
value: /etc/git-secret/ssh
- name: GIT_SYNC_ADD_USER
value: 'true'
- name: GIT_SYNC_SSH
value: 'true'
- name: GIT_KNOWN_HOSTS
value: 'false'
image: k8s.gcr.io/git-sync:v3.1.6
name: git-sync
securityContext:
runAsUser: 65533
volumeMounts:
- mountPath: /git
name: airflow-dags
readOnly: false
- mountPath: /etc/git-secret/ssh
name: git-sync-ssh-key
subPath: ssh
nodeSelector: {}
restartPolicy: Never
securityContext:
fsGroup: 50000
runAsUser: 50000
serviceAccountName: airflow-dev-worker-serviceaccount
volumes:
- emptyDir: {}
name: airflow-dags
- name: airflow-logs
persistentVolumeClaim:
claimName: dw-airflow-dev-logs
- name: git-sync-ssh-key
secret:
items:
- key: gitSshKey
mode: 444
path: ssh
secretName: airflow-private-dags-dev
- configMap:
name: airflow-dev-airflow-config
name: airflow-config
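To verify the fix, a couple of commands worth running (the task pod name is hypothetical; the container names come from the template above):

kubectl logs <task-pod-name> -n dev -c git-sync
kubectl exec -it <task-pod-name> -n dev -c base -- ls /opt/airflow/dags/repo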
I'm trying to use kustomize to patch a Kubernetes resource. However, the order of the initContainers list is different in the output.
For example, the input is:
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox:1.28
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-mydb
image: busybox:1.28
command: ['sh', '-c', "sleep 3600"]
- name: init-myservice
image: busybox:1.28
command: ['sh', '-c', "sleep 7200"]
After the patch, the output becomes:
apiVersion: v1
kind: Pod
metadata:
labels:
app: myapp
name: myapp-pod
spec:
containers:
- command:
- sh
- -c
- echo The app is running! && sleep 3600
image: busybox:1.28
name: myapp-container
initContainers:
- command:
- sh
- -c
- sleep 7200
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
image: busybox:1.28
name: init-myservice
- command:
- sh
- -c
- sleep 3600
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
image: busybox:1.28
name: init-mydb
I have tried the --reorder argument, but it doesn't help.
Version tested:
{Version:kustomize/v4.1.3 GitCommit:0f614e92f72f1b938a9171b964d90b197ca8fb68 BuildDate:2021-05-20T20:52:40Z GoOs:linux GoArch:amd64}
kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- source.yaml
patches:
- path: ./pod-patch.yaml
target:
kind: Pod
name: ".*"
pod-patch.yaml
apiVersion: apps/v1
kind: Pod
metadata:
name: doesNotMatter
spec:
initContainers:
- name: init-myservice
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
- name: init-mydb
env:
- name: HTTP_ADDR
value: https://[$(HOST_IP)]:8501
This is a non-issue. The order is different because you've inverted it in your pod-patch.yaml.
In source.yaml, the order of the initContainers is [init-mydb, init-myservice]. In pod-patch.yaml it's [init-myservice, init-mydb].
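A sketch of the patch with its list order matching source.yaml, which should keep [init-mydb, init-myservice] in the output (assuming kustomize keeps following the patch's element order, as the output above suggests):

apiVersion: v1
kind: Pod
metadata:
  name: doesNotMatter
spec:
  initContainers:
    - name: init-mydb
      env:
        - name: HTTP_ADDR
          value: https://[$(HOST_IP)]:8501
    - name: init-myservice
      env:
        - name: HTTP_ADDR
          value: https://[$(HOST_IP)]:8501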
I created a pod following a Red Hat blog post, then created a second pod using the YAML file generated from it.
Post: https://www.redhat.com/sysadmin/compose-podman-pods
When creating the pod using the commands, the pod works fine (can access localhost:8080)
When creating the pod using the YAML file, I get error 403 forbidden
I have tried this on two different hosts (both creating the pod from scratch and using the YAML), deleting all images and pods each time to make sure nothing was influencing the process.
I'm using podman 2.0.4 on Ubuntu 20.04
Commands:
podman pod create --name wptestpod -p 8080:80
podman run \
-d --restart=always --pod=wptestpod \
-e MYSQL_ROOT_PASSWORD="myrootpass" \
-e MYSQL_DATABASE="wp" \
-e MYSQL_USER="wordpress" \
-e MYSQL_PASSWORD="w0rdpr3ss" \
--name=wptest-db mariadb
podman run \
-d --restart=always --pod=wptestpod \
-e WORDPRESS_DB_NAME="wp" \
-e WORDPRESS_DB_USER="wordpress" \
-e WORDPRESS_DB_PASSWORD="w0rdpr3ss" \
-e WORDPRESS_DB_HOST="127.0.0.1" \
--name wptest-web wordpress
Original YAML file from podman generate kube wptestpod > wptestpod.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.0.4
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: '2020-08-26T17:02:56Z'
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- command:
- apache2-foreground
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_USER
value: wordpress
- name: APACHE_CONFDIR
value: /etc/apache2
- name: PHP_LDFLAGS
value: -Wl,-O1 -pie
- name: PHP_VERSION
value: 7.4.9
- name: PHP_EXTRA_CONFIGURE_ARGS
value: --with-apxs2 --disable-cgi
- name: GPG_KEYS
value: 42670A7FE4D0441C8E4632349E4FDC074A4EF02D 5A52880781F755608BF815FC910DEB46F53EA312
- name: WORDPRESS_DB_PASSWORD
value: t3stp4ssw0rd
- name: APACHE_ENVVARS
value: /etc/apache2/envvars
- name: PHP_ASC_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz.asc
- name: PHP_SHA256
value: 23733f4a608ad1bebdcecf0138ebc5fd57cf20d6e0915f98a9444c3f747dc57b
- name: PHP_URL
value: https://www.php.net/distributions/php-7.4.9.tar.xz
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: PHP_CPPFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: PHP_MD5
- name: PHP_EXTRA_BUILD_DEPS
value: apache2-dev
- name: PHP_CFLAGS
value: -fstack-protector-strong -fpic -fpie -O2 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64
- name: WORDPRESS_SHA1
value: 03fe1a139b3cd987cc588ba95fab2460cba2a89e
- name: PHPIZE_DEPS
value: "autoconf \t\tdpkg-dev \t\tfile \t\tg++ \t\tgcc \t\tlibc-dev \t\tmake \t\tpkg-config \t\tre2c"
- name: WORDPRESS_VERSION
value: '5.5'
- name: PHP_INI_DIR
value: /usr/local/etc/php
- name: HOSTNAME
value: wptestpod
image: docker.io/library/wordpress:latest
name: wptest-web
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
env:
- name: PATH
value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
- name: TERM
value: xterm
- name: container
value: podman
- name: MYSQL_PASSWORD
value: t3stp4ssw0rd
- name: GOSU_VERSION
value: '1.12'
- name: GPG_KEYS
value: 177F4010FE56CA3336300305F1656F24C74CD1D8
- name: MARIADB_MAJOR
value: '10.5'
- name: MYSQL_ROOT_PASSWORD
value: t3stp4ssw0rd
- name: MARIADB_VERSION
value: 1:10.5.5+maria~focal
- name: MYSQL_DATABASE
value: wp
- name: MYSQL_USER
value: wordpress
- name: HOSTNAME
value: wptestpod
image: docker.io/library/mariadb:latest
name: wptest-db
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
---
metadata:
creationTimestamp: null
spec: {}
status:
loadBalancer: {}
YAML file with certain envs removed (taken from blog post):
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-1.9.3
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: "2020-07-01T20:17:42Z"
labels:
app: wptestpod
name: wptestpod
spec:
containers:
- name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Can anyone see why this pod would not work when created using the YAML file, but works fine when created using the commands? It seems like a good workflow, but it's useless if the pods produced with the YAML are non-functional.
I found the same article and hit the same problem as you. None of the following tests worked for me:
Add and remove environment variables
Add and remove restartPolicy part
Play with the capabilities part
As soon as you put the command part back, everything fires up again.
Check it with the following wordpress.yaml:
# Generation of Kubernetes YAML is still under development!
#
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-2.2.1
apiVersion: v1
kind: Pod
metadata:
labels:
app: wordpress-pod
name: wordpress-pod
spec:
containers:
- command:
- apache2-foreground
name: wptest-web
env:
- name: WORDPRESS_DB_NAME
value: wp
- name: WORDPRESS_DB_HOST
value: 127.0.0.1
- name: WORDPRESS_DB_USER
value: wordpress
- name: WORDPRESS_DB_PASSWORD
value: w0rdpr3ss
image: docker.io/library/wordpress:latest
ports:
- containerPort: 80
hostPort: 8080
protocol: TCP
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /var/www/html
- command:
- mysqld
name: wptest-db
env:
- name: MYSQL_ROOT_PASSWORD
value: myrootpass
- name: MYSQL_USER
value: wordpress
- name: MYSQL_PASSWORD
value: w0rdpr3ss
- name: MYSQL_DATABASE
value: wp
image: docker.io/library/mariadb:latest
resources: {}
securityContext:
allowPrivilegeEscalation: true
capabilities: {}
privileged: false
readOnlyRootFilesystem: false
seLinuxOptions: {}
workingDir: /
status: {}
Play & checks:
# Create containers, pod and run everything
$ podman play kube wordpress.yaml
# Output
Pod:
5a211c35419b4fcf0deda718e47eec2dd10653a5c5bacc275c312ae75326e746
Containers:
bfd087b5649f8d1b3c62ef86f28f4bcce880653881bcda21823c09e0cca1c85b
5aceb11500db0a91b4db2cc4145879764e16ed0e8f95a2f85d9a55672f65c34b
# Check running state
$ podman container ls; podman pod ls
# Output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5aceb11500db docker.io/library/mariadb:latest mysqld 13 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-db
bfd087b5649f docker.io/library/wordpress:latest apache2-foregroun... 16 seconds ago Up 10 seconds ago 0.0.0.0:8080->80/tcp wordpress-pod-wptest-web
d8bf33eede43 k8s.gcr.io/pause:3.2 19 seconds ago Up 11 seconds ago 0.0.0.0:8080->80/tcp 5a211c35419b-infra
POD ID NAME STATUS CREATED INFRA ID # OF CONTAINERS
5a211c35419b wordpress-pod Running 20 seconds ago d8bf33eede43 3
A bit more explanation about the bug:
The problem is that ENTRYPOINT and CMD are not parsed correctly from the images, as they should be and as you would expect. It was working in previous versions, and the bug has already been identified and fixed for future ones.
For complete reference:
Comment found at podman#8710-comment.748672710 breaks this problem into two pieces:
"make podman play use ENVs from image" (podman#8654 already fixed in mainstream)
"podman play should honour both ENTRYPOINT and CMD from image" (podman#8666)
This one was superseded by "play kube: fix args/command handling" (podman#8807, already merged to mainstream).
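If you need to find the command to put back for another image, a quick sketch using podman image inspect (the Go template fields are the standard image config keys):

podman image inspect docker.io/library/wordpress:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'
podman image inspect docker.io/library/mariadb:latest --format '{{.Config.Entrypoint}} {{.Config.Cmd}}'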
I want to set up a pod with two containers running inside it, both of which access a mounted file, /var/run/udspath.
In container serviceC, I need to change the owner and group of /var/run/udspath, so I added a command to the YAML file. But it does not work.
kubectl apply does not complain, but container serviceC is not created.
Without the "command: ['/bin/sh', '-c', 'sudo chown 1337:1337 /var/run/udspath']", the container can be created.
apiVersion: v1
kind: Service
metadata:
name: clitool
labels:
app: httpbin
spec:
ports:
- name: http
port: 8000
selector:
app: httpbin
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: clitool
spec:
replicas: 1
strategy: {}
template:
metadata:
annotations:
sidecar.istio.io/status: '{"version":"1c09c07e5751560367349d807c164267eaf5aea4018b4588d884f7d265cf14a4","initContainers":["istio-init"],"containers":["serviceC"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: httpbin
version: v1
spec:
containers:
- image:
name: serviceA
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /var/run/udspath
name: sdsudspath
- image:
imagePullPolicy: IfNotPresent
name: serviceB
ports:
- containerPort: 8000
resources: {}
- args:
- proxy
- sidecar
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- httpbin
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15007
- --discoveryRefreshDelay
- 1s
- --zipkinAddress
- zipkin.istio-system:9411
- --connectTimeout
- 10s
- --statsdUdpAddress
- istio-statsd-prom-bridge.istio-system:9125
- --proxyAdminPort
- "15000"
- --controlPlaneAuthPolicy
- NONE
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
image:
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "sudo chown 1337:1337 /var/run/udspath"]
name: serviceC
resources:
requests:
cpu: 10m
securityContext:
privileged: false
readOnlyRootFilesystem: true
runAsUser: 1337
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/certs/
name: istio-certs
readOnly: true
- mountPath: /var/run/udspath
name: sdsudspath
initContainers:
- args:
- -p
- "15001"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- 8000,
- -d
- ""
image: docker.io/quanlin/proxy_init:180712-1038
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
volumes:
- name: sdsudspath
hostPath:
path: /var/run/udspath
- emptyDir:
medium: Memory
name: istio-envoy
- name: istio-certs
secret:
optional: true
secretName: istio.default
status: {}
---
kubectl describe pod xxx shows:
serviceC:
Container ID:
Image:
Image ID:
Port: <none>
Command:
/bin/sh
Args:
-c
sudo chown 1337:1337 /var/run/udspath
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 30 Jul 2018 10:30:04 -0700
Finished: Mon, 30 Jul 2018 10:30:04 -0700
Ready: False
Restart Count: 2
Requests:
cpu: 10m
Environment:
POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: clitool-5d548b856-6v9p9 (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from certs (ro)
/etc/istio/proxy from envoy (rw)
/var/run/udspath from sdsudspath (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-g2zzv (ro)
More information would be helpful, like what error you are getting.
Nevertheless, it really depends on what is defined in serviceC's Dockerfile ENTRYPOINT or CMD.
Mapping between docker and kubernetes:
Docker Entrypoint --> Pod command (The command run by the container)
Docker cmd --> Pod args (The arguments passed to the command)
https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/
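To illustrate the mapping with a self-contained sketch (the pod name and image are only for the demo, following the Kubernetes docs linked above): command replaces the image's ENTRYPOINT and args replaces its CMD. That is also the likely reason serviceC shows Completed followed by CrashLoopBackOff: its command runs only the chown and then exits instead of starting the proxy.

apiVersion: v1
kind: Pod
metadata:
  name: command-demo
spec:
  restartPolicy: OnFailure
  containers:
    - name: command-demo-container
      image: debian
      command: ["printenv"]                  # replaces the image's ENTRYPOINT
      args: ["HOSTNAME", "KUBERNETES_PORT"]  # replaces the image's CMD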