drone-ci: get status of previous step - kubernetes

I am writing an email plugin with PGP encryption for Drone which must react to the status of the previous step in order to send the right email template, but I don't know how to get this information.
I took a look at the environment variables that are passed into my container, but there is no information about the previous step. How do other applications react to the outcome of previous steps?
Here is an excerpt of my drone.yaml.
kind: pipeline
type: kubernetes
name: notification-test
node_selector:
  kubernetes.io/os: linux
  kubernetes.io/arch: amd64
steps:
- name: exit-code
  commands:
  - apk update
  - apk add bash
  - bash -c "env | sort"
  - exit 1
  image: docker.io/library/alpine:3.16.0
  resources:
    limits:
      cpu: 150
      memory: 150M
- name: post-env
  commands:
  - apk update
  - apk add bash
  - bash -c "env | sort"
  depends_on:
  - exit-code
  image: docker.io/library/alpine:3.16.0
  resources:
    limits:
      cpu: 150
      memory: 150M
- name: drone-email
  depends_on:
  - post-env
  - exit-code
  environment:
    SMTP_FROM_ADDRESS:
      from_secret: smtp_from_address
    SMTP_FROM_NAME:
      from_secret: smtp_from_name
    SMTP_HOST:
      from_secret: smtp_host
    SMTP_USERNAME:
      from_secret: smtp_username
    SMTP_PASSWORD:
      from_secret: smtp_password
  image: docker.io/volkerraschek/drone-email:latest
  pull: always
  resources:
    limits:
      cpu: 150
      memory: 150M
  when:
    status:
    - changed
    - failure
trigger:
  event:
    exclude:
    - tag
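For what it's worth, a notification step that needs to branch on earlier results can usually look at the build-level variables the runner injects rather than per-step data. A minimal sketch (my assumptions: DRONE_BUILD_STATUS is available on current servers, while DRONE_FAILED_STEPS exists only on Drone 2.x and may be missing on older versions):

- name: inspect-status
  image: docker.io/library/alpine:3.16.0
  depends_on:
  - exit-code
  commands:
  # DRONE_BUILD_STATUS should read "failure" once an earlier step failed,
  # "success" otherwise; DRONE_FAILED_STEPS (2.x) lists the failed step names.
  - env | grep ^DRONE_ | sort
  when:
    status:
    - success
    - failure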

Related

Argo Workflow fails with no such directory error when using input parameters

I'm currently doing a PoC to validate usage of Argo Workflow. I created a workflow spec with the following template (this is just a small portion of the workflow yaml):
templates:
- name: dummy-name
  inputs:
    parameters:
    - name: params
  container:
    name: container-name
    image: <image>
    volumeMounts:
    - name: vault-token
      mountPath: "/etc/secrets"
      readOnly: true
    imagePullPolicy: IfNotPresent
    command: ['workflow', 'f10', 'reports', 'expiry', '.', '--days-until-expiry', '30', '--vault-token-file-path', '/etc/secrets/token', '--environment', 'corporate', '--log-level', 'debug']
The above way of passing the commands works without any issues upon submitting the workflow. However, if I replace the command with {{inputs.parameters.params}} like this:
templates:
- name: dummy-name
  inputs:
    parameters:
    - name: params
  container:
    name: container-name
    image: <image>
    volumeMounts:
    - name: vault-token
      mountPath: "/etc/secrets"
      readOnly: true
    imagePullPolicy: IfNotPresent
    command: ['workflow', '{{inputs.parameters.params}}']
it fails with the following error:
DEBU[2023-01-20T18:11:07.220Z] Log line
content="Error: failed to find name in PATH: exec: \"workflow f10 reports expiry . --days-until-expiry 30 --vault-token-file-path /etc/secrets/token --environment corporate --log-level debug\":
stat workflow f10 reports expiry . --days-until-expiry 30 --vault-token-file-path /etc/secrets/token --environment corporate --log-level debug: no such file or directory"
Am I missing something here?
FYI: The Dockerfile that builds the container has the following ENTRYPOINT set:
ENTRYPOINT ["workflow"]
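Not a definitive fix, but a common workaround when a single templated parameter has to carry several CLI flags is to route it through a shell so the string is word-split, instead of being exec'd as one literal path (which is what the stat ... no such file or directory hints at). A sketch, assuming the image ships sh and that params holds only the flags after the binary name:

container:
  name: container-name
  image: <image>
  # bypass the ENTRYPOINT and let the shell split the parameter string
  # into individual arguments before invoking the workflow binary
  command: ['sh', '-c']
  args: ['workflow {{inputs.parameters.params}}']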

Mounting a camera to a pod gets MountVolume.SetUp failed for volume "default-token-c8hm5" : failed to sync secret cache: timed out waiting for the condition

On my Jetson NX, I would like to write a yaml file that mounts 2 cameras into a pod.
The yaml:
containers:
- name: my-pod
  image: my_image:v1.0.0
  imagePullPolicy: Always
  volumeMounts:
  - mountPath: /dev/video0
    name: dev-video0
  - mountPath: /dev/video1
    name: dev-video1
  resources:
    limits:
      nvidia.com/gpu: 1
  ports:
  - containerPort: 9000
  command: [ "/bin/bash"]
  args: ["-c", "while true; do echo hello; sleep 10;done"]
  securityContext:
    privileged: true
volumes:
- hostPath:
    path: /dev/video0
    type: ""
  name: dev-video0
- hostPath:
    path: /dev/video1
    type: ""
  name: dev-video1
but when I deploy it as a pod, I get this error:
MountVolume.SetUp failed for volume "default-token-c8hm5" : failed to sync secret cache: timed out waiting for the condition
I tried removing the volumes from the yaml, and then the pod can be deployed successfully. Any comments on this issue?
Another issue is that when a pod runs into problems, it consumes the rest of the storage on my Jetson NX. I guess k8s may create lots of temporary files or logs when something goes wrong? Is there any solution to this issue, otherwise all of my pods will be evicted...
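On the disk-usage side, one option worth trying (a sketch with placeholder sizes, not taken from the manifest above) is to give the pod an ephemeral-storage request and limit, so a misbehaving pod is evicted for exceeding its own budget instead of filling the node's disk with logs and temporary files:

resources:
  requests:
    ephemeral-storage: "1Gi"
  limits:
    nvidia.com/gpu: 1
    # counts the container's writable layer, logs and emptyDir usage;
    # the pod is evicted when it exceeds this bound
    ephemeral-storage: "2Gi"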

Using a Docker-Volume with a Mount to a Symlink, But It's Persisting Data to the Host Too. Why?

I created a Docker volume as such:
sudo docker volume create --driver=local --name=es-data1 --opt type=none --opt o=bind --opt device=/usr/local/contoso/data1/elasticsearch/data1
/usr/local/contoso/data1/elasticsearch/data1 is a symlink.
And I'm instantiating three Elasticsearch Docker containers in my docker-compose.yml file as such:
version: '3.7'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    logging:
      driver: none
    container_name: elasticsearch1
    environment:
      - node.name=elasticsearch1
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1G -Xmx1G"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: unless-stopped
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - es-logs:/var/log
      - es-data1:/usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9200:9200
      - 9300:9300
    healthcheck:
      test: wget -q -O - http://127.0.0.1:9200/_cat/health
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    logging:
      driver: none
    container_name: elasticsearch2
    environment:
      - node.name=elasticsearch2
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1G -Xmx1G"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: unless-stopped
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - es-logs:/var/log
      - es-data2:/usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9201:9200
    healthcheck:
      test: wget -q -O - http://127.0.0.1:9200/_cat/health
  elasticsearch3:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
    logging:
      driver: none
    container_name: elasticsearch3
    environment:
      - node.name=elasticsearch3
      - cluster.name=docker-cluster
      - cluster.initial_master_nodes=elasticsearch1
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms1G -Xmx1G"
      - "discovery.zen.ping.unicast.hosts=elasticsearch1"
      - http.cors.enabled=true
      - http.cors.allow-origin=*
      - network.host=_eth0_
    ulimits:
      nproc: 65535
      memlock:
        soft: -1
        hard: -1
    cap_add:
      - ALL
    # privileged: true
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '1'
          memory: 1G
      restart_policy:
        condition: unless-stopped
        delay: 5s
        max_attempts: 3
        window: 10s
    volumes:
      - es-logs:/var/log
      - es-data3:/usr/share/elasticsearch/data
    networks:
      - elastic
      - ingress
    ports:
      - 9202:9200
    healthcheck:
      test: wget -q -O - http://127.0.0.1:9200/_cat/health
volumes:
  es-data1:
    driver: local
    external: true
  es-data2:
    driver: local
    external: true
  es-data3:
    driver: local
    external: true
networks:
  elastic:
    external: true
  ingress:
    external: true
My Problem:
The Elasticsearch containers are persisting index data to both the host filesystem and the mounted symlink.
My Question:
How do I modify my configuration so that the Elasticsearch containers are only persisting index data to the mounted symlink?
It seems to be the default behavior of the local volume driver that the files are additionally stored on the host machine. You can change the volume settings in your docker-compose.yml to prevent Docker from persisting (copying) files to the host file system (see nocopy: true), like so:
version: '3.7'
services:
  elasticsearch:
    ....
    volumes:
      - type: volume
        source: es-data1
        target: /usr/share/elasticsearch/data
        volume:
          nocopy: true
    ....
volumes:
  es-data1:
    driver: local
    external: true
You may also want to check this question: Docker-compose - volumes driver local meaning. There seem to be some Docker volume plugins made specifically for portability reasons, such as Flocker or Hedvig, but I haven't used a plugin for that purpose myself, so I can't really recommend one yet.
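To double-check where such a volume actually points, docker volume inspect shows the bind options that were set at creation time (output trimmed; the Mountpoint path depends on your Docker data root):

sudo docker volume inspect es-data1
# ...
# "Mountpoint": "/var/lib/docker/volumes/es-data1/_data",
# "Options": {
#     "device": "/usr/local/contoso/data1/elasticsearch/data1",
#     "o": "bind",
#     "type": "none"
# },
# ...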

How to make the successfully running pods delete themselves within the time set by the user

I am working on a stress test for a new scheduler in Kubernetes. I need to create a lot of CPU- and memory-heavy pods to analyze performance.
I am using image: polinux/stress in my pods.
I would like to ask whether there is an instruction, or something I can set when writing the yaml file, so that a successfully created pod deletes itself within a time I choose.
The following yaml file is the pod I am writing for the stress test. I would like to ask whether I can configure it here so that it deletes itself after a period of time.
apiVersion: v1
kind: Pod
metadata:
  name: alltest12
  namespace: test
spec:
  containers:
  - name: alltest
    image: polinux/stress
    resources:
      requests:
        memory: "1000Mi"
        cpu: "1"
      limits:
        memory: "1000Mi"
        cpu: "1"
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "500M", "--vm-hang", "1"]
If polinux/stress contains a shell, I believe you can have the thing kill itself:
containers:
- image: polinux/stress
  command:
  - sh
  - -c
  - |
    sh -c "sleep 300; kill -9 1" &
    stress --vm 1 --vm-bytes 500M --vm-hang 1
Or even slightly opposite:
- |
  stress --vm etc etc &
  child_pid=$!
  sleep 300
  kill -9 $child_pid
And you can parameterize that setup using env:
env:
- name: LIVE_SECONDS
  value: "300"
command:
- sh
- -c
- |
  stress --vm 1 --vm-bytes 500M --vm-hang 1 &
  child_pid=$!
  sleep ${LIVE_SECONDS}
  kill -9 $child_pid
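If these test pods don't strictly have to be bare Pods, another route (a sketch; the durations are placeholders, and ttlSecondsAfterFinished needs a cluster where the TTL-after-finished feature is available) is to wrap the same container in a Job: activeDeadlineSeconds terminates it after the chosen time and ttlSecondsAfterFinished removes the finished object afterwards:

apiVersion: batch/v1
kind: Job
metadata:
  name: alltest12
  namespace: test
spec:
  activeDeadlineSeconds: 300     # stop the pod after 5 minutes
  ttlSecondsAfterFinished: 60    # clean up the Job and its pod shortly afterwards
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: alltest
        image: polinux/stress
        command: ["stress"]
        args: ["--vm", "1", "--vm-bytes", "500M", "--vm-hang", "1"]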

Docker error: "socket.gaierror: [Errno -3] Temporary failure in name resolution" while running celery in a docker image

Docker-compose.yml
version: "3"
services:
web:
# replace username/repo:tag with your name and image details
image: sunilsuthar/sim
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "4004:80"
networks:
- webnet
rabbit:
hostname: rabbit
image: sunilsuthar/query_with_rabbitmq
environment:
- RABBITMQ_DEFAULT_USER=rvihzpae
- RABBITMQ_DEFAULT_PASS=Z0AWdEAbJpjvy1btDRYqTq2lDoJcXHv7
links:
- rabbitmq
ports:
- "15672:15672"
- "5672:5672"
tty: true
celery:
image: sunilsuthar/query_with_rabbitmq
command: celery worker -l info -A app.celery
user: nobody
volumes:
- '.:/app'
networks:
webnet:
Check whether your docker container is on the correct network and whether you can ping the rabbitmq server from it. In my case the firewall settings had been reset and the local network was unreachable from within the container; restarting the Docker daemon resolved the issue.
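A few commands that help narrow this down (the network and container names here are assumptions; use whatever docker ps and docker network ls actually report, and note that ping or getent may be missing from slim images):

# confirm both services ended up on the same network
docker network ls
docker network inspect <stack>_webnet

# check name resolution and reachability from inside the celery container
docker exec -it <celery-container> ping -c 1 rabbit
docker exec -it <celery-container> getent hosts rabbit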