I'm testing Argo Workflows on Minikube, and I'm using MinIO to upload/download data created within the workflow. When I submit the template YAML, the pod fails with a "failed to save outputs" error.
I checked the logs with kubectl logs -n air [POD NAME] -c wait; the result is below.
time="2019-04-24T04:25:27Z" level=info msg="Creating a docker executor"
time="2019-04-24T04:25:27Z" level=info msg="Executor (version: v2.2.1, build_date: 2018-10-11T16:27:29Z) and goes on and on
time="2019-04-24T04:25:27Z" level=info msg="Waiting on main container"
time="2019-04-24T04:25:29Z" level=info msg="main container started with container ID: 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e"
time="2019-04-24T04:25:29Z" level=info msg="Starting annotations monitor"
time="2019-04-24T04:25:29Z" level=info msg="docker wait 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e"
time="2019-04-24T04:25:29Z" level=info msg="Starting deadline monitor"
time="2019-04-24T04:25:33Z" level=info msg="Main container completed"
time="2019-04-24T04:25:33Z" level=info msg="No sidecars"
time="2019-04-24T04:25:33Z" level=info msg="Saving output artifacts"
time="2019-04-24T04:25:33Z" level=info msg="Saving artifact: get-data"
time="2019-04-24T04:25:33Z" level=info msg="Archiving 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e:/data/ to /argo/outputs/artifacts/get-data.tgz"
time="2019-04-24T04:25:33Z" level=info msg="sh -c docker cp -a 86afd5f5a35fbea3fcd65fdf565f8194d79535034d94548bb371681faf549e6e:/data/ - | gzip > /argo/outputs/artifacts/get-data.tgz"
time="2019-04-24T04:25:33Z" level=info msg="Annotations monitor stopped"
time="2019-04-24T04:25:34Z" level=info msg="Archiving completed"
time="2019-04-24T04:25:34Z" level=info msg="Creating minio client 192.168.99.112:31774 using IAM role"
time="2019-04-24T04:25:34Z" level=info msg="Saving from /argo/outputs/artifacts/get-data.tgz to s3 (endpoint: 192.168.99.112:31774, bucket: reseach-bucket, key: /data/)"
time="2019-04-24T04:25:34Z" level=info msg="Deadline monitor stopped"
time="2019-04-24T04:26:04Z" level=info msg="Alloc=3827 TotalAlloc=11256 Sys=9830 NumGC=4 Goroutines=7"
time="2019-04-24T04:26:04Z" level=fatal msg="Get http://169.254.169.254/latest/meta-data/iam/security-credentials: dial tcp 169.254.169.254:80: i/o and goes on and on
The template YAML file looks like this:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
...
########################################
  - name: template-data-handling
    activeDeadlineSeconds: 10800
    outputs:
      artifacts:
      - name: get-data
        path: /data/
        s3:
          endpoint: 192.168.99.112:31774
          bucket: reseach-bucket
          key: /data/
          accessKeySecret:
            name: minio-credentials
            key: accesskey
          secretKeySecret:
            name: minio-credentials
            key: secretkey
    retryStrategy:
      limit: 1
    container:
      image: demo-pipeline
      imagePullPolicy: Never
      command: [/bin/sh, -c]
      args:
      - |
        python test.py
Could someone help?
Did you create the minio-credentials secret (containing both accesskey and secretkey) in the namespace where the workflow is running?
Example:
The Argo controller pod runs in the argo namespace, but the workflow is submitted in the default namespace. In that case, the minio-credentials secret must exist in the default namespace.
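For instance, a minimal way to create that secret in the workflow's namespace could look like this (the default namespace and the placeholder values are assumptions; substitute your own):
kubectl create secret generic minio-credentials -n default \
  --from-literal=accesskey=<minio-access-key> \
  --from-literal=secretkey=<minio-secret-key>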
I'm running an Argo workflow on a local K8s cluster with MinIO. I'm setting up an artifact repository on MinIO where output artifacts from my workflow can be stored. I followed the instructions here: https://argoproj.github.io/argo-workflows/configure-artifact-repository/#configuring-minio
The error I'm running into is: failed to create new S3 client: Endpoint url cannot have fully qualified paths.
My MinIO endpoint is at http://127.0.0.1:52139.
Here is my workflow YAML file:
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifactory-repository-ref-
spec:
  archiveLogs: true
  entrypoint: main
  templates:
    - name: main
      container:
        image: docker/whalesay:latest
        command: [ sh, -c ]
        args: [ "cowsay hello world | tee /tmp/hello_world.txt" ]
      archiveLocation:
        archiveLogs: true
      outputs:
        artifacts:
          - name: hello_world
            path: /tmp/hello_world.txt
Here is my workflow-controller-configmap YAML which is deployed in the same namespace as the workflow:
# This file describes the config settings available in the workflow controller configmap
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data: # "config: |" key is optional in 2.7+!
  artifactRepository: |    # However, all nested maps must be strings
    archiveLogs: true
    s3:
      endpoint: argo-artifacts:9000
      bucket: my-bucket
      insecure: true
      accessKeySecret:    # omit if accessing via AWS IAM
        name: my-minio-cred
        key: accessKey
      secretKeySecret:    # omit if accessing via AWS IAM
        name: my-minio-cred
        key: secretKey
      useSDKCreds: true
I've created a secret called my-minio-cred in the same namespace where the workflow is running.
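For reference, a secret like that is often created along these lines (the key names here are assumed to match the accessKeySecret/secretKeySecret entries in the configmap above; adjust to your setup):
kubectl create secret generic my-minio-cred -n <workflow-namespace> \
  --from-literal=accessKey=<minio-access-key> \
  --from-literal=secretKey=<minio-secret-key>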
Here are the logs from the pod where the workflow is running:
time="2023-02-16T21:39:05.044Z" level=info msg="Starting Workflow Executor" version=v3.4.5
time="2023-02-16T21:39:05.047Z" level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
time="2023-02-16T21:39:05.047Z" level=info msg="Executor initialized" deadline="0001-01-01 00:00:00 +0000 UTC" includeScriptOutput=false namespace=argo podName=artifactory-repository-ref-5tcmt template="{\"name\":\"main\",\"inputs\":{},\"outputs\":{\"artifacts\":[{\"name\":\"hello_world\",\"path\":\"/tmp/hello_world.txt\"}]},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"docker/whalesay:latest\",\"command\":[\"sh\",\"-c\"],\"args\":[\"cowsay hello world | tee /tmp/hello_world.txt\"],\"resources\":{}},\"archiveLocation\":{\"archiveLogs\":true,\"s3\":{\"endpoint\":\"http://127.0.0.1:52897\",\"bucket\":\"my-bucket\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"accessKey\"},\"secretKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"secretKey\"},\"useSDKCreds\":true,\"key\":\"artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt\"}}}" version="&Version{Version:v3.4.5,BuildDate:2023-02-07T12:36:25Z,GitCommit:1253f443baa8ad1610d2e62ec26ecdc85fe1b837,GitTag:v3.4.5,GitTreeState:clean,GoVersion:go1.18.10,Compiler:gc,Platform:linux/arm64,}"
time="2023-02-16T21:39:05.047Z" level=info msg="Starting deadline monitor"
time="2023-02-16T21:39:08.048Z" level=info msg="Main container completed" error="<nil>"
time="2023-02-16T21:39:08.048Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
time="2023-02-16T21:39:08.048Z" level=info msg="No output parameters"
time="2023-02-16T21:39:08.048Z" level=info msg="Saving output artifacts"
time="2023-02-16T21:39:08.048Z" level=info msg="stopping progress monitor (context done)" error="context canceled"
time="2023-02-16T21:39:08.048Z" level=info msg="Deadline monitor stopped"
time="2023-02-16T21:39:08.048Z" level=info msg="Staging artifact: hello_world"
time="2023-02-16T21:39:08.049Z" level=info msg="Copying /tmp/hello_world.txt from container base image layer to /tmp/argo/outputs/artifacts/hello_world.tgz"
time="2023-02-16T21:39:08.049Z" level=info msg="/var/run/argo/outputs/artifacts/tmp/hello_world.txt.tgz -> /tmp/argo/outputs/artifacts/hello_world.tgz"
time="2023-02-16T21:39:08.049Z" level=info msg="S3 Save path: /tmp/argo/outputs/artifacts/hello_world.tgz, key: artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/hello_world.tgz"
time="2023-02-16T21:39:08.049Z" level=info msg="Creating minio client using static credentials" endpoint="http://127.0.0.1:52897"
time="2023-02-16T21:39:08.049Z" level=warning msg="Non-transient error: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.049Z" level=info msg="Save artifact" artifactName=hello_world duration="282.917µs" error="failed to create new S3 client: Endpoint url cannot have fully qualified paths." key=artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/hello_world.tgz
time="2023-02-16T21:39:08.049Z" level=error msg="executor error: failed to create new S3 client: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.049Z" level=info msg="S3 Save path: /tmp/argo/outputs/logs/main.log, key: artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/main.log"
time="2023-02-16T21:39:08.049Z" level=info msg="Creating minio client using static credentials" endpoint="http://127.0.0.1:52897"
time="2023-02-16T21:39:08.049Z" level=warning msg="Non-transient error: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.049Z" level=info msg="Save artifact" artifactName=main-logs duration="28.5µs" error="failed to create new S3 client: Endpoint url cannot have fully qualified paths." key=artifactory-repository-ref-5tcmt/artifactory-repository-ref-5tcmt/main.log
time="2023-02-16T21:39:08.049Z" level=error msg="executor error: failed to create new S3 client: Endpoint url cannot have fully qualified paths."
time="2023-02-16T21:39:08.056Z" level=info msg="Create workflowtaskresults 403"
time="2023-02-16T21:39:08.056Z" level=warning msg="failed to patch task set, falling back to legacy/insecure pod patch, see https://argoproj.github.io/argo-workflows/workflow-rbac/" error="workflowtaskresults.argoproj.io is forbidden: User \"system:serviceaccount:argo:default\" cannot create resource \"workflowtaskresults\" in API group \"argoproj.io\" in the namespace \"argo\""
time="2023-02-16T21:39:08.057Z" level=info msg="Patch pods 403"
time="2023-02-16T21:39:08.057Z" level=warning msg="Non-transient error: pods \"artifactory-repository-ref-5tcmt\" is forbidden: User \"system:serviceaccount:argo:default\" cannot patch resource \"pods\" in API group \"\" in the namespace \"argo\""
time="2023-02-16T21:39:08.057Z" level=error msg="executor error: pods \"artifactory-repository-ref-5tcmt\" is forbidden: User \"system:serviceaccount:argo:default\" cannot patch resource \"pods\" in API group \"\" in the namespace \"argo\""
time="2023-02-16T21:39:08.057Z" level=info msg="Alloc=6350 TotalAlloc=12366 Sys=18642 NumGC=4 Goroutines=5"
time="2023-02-16T21:39:08.057Z" level=fatal msg="failed to create new S3 client: Endpoint url cannot have fully qualified paths."
I've tried changing the endpoint key in the workflow-controller-config.yaml from 127.0.0.1:52139 to 127.0.0.1:9000 and also to argo-artifacts:9000, but it still doesn't work. argo-artifacts is the name of the LoadBalancer service that's created by the helm install argo-artifacts minio/minio command.
I got the endpoint of the MinIO bucket from minikube service --url argo-artifacts, as given in the 'Configuring MinIO' section at https://argoproj.github.io/argo-workflows/configure-artifact-repository/#configuring-minio
Everything is in the same namespace.
What could be wrong here?
I tried changing the endpoint URL of the MinIO bucket, changing namespaces for different components, and changing the namespace that the argo-artifacts service gets deployed in.
I am trying to use the argocd-image-updater, but it is giving me the error below:
time="2022-09-13T15:40:02Z" level=debug msg="Using version constraint '^0.1' when looking for a new tag" alias= application=ms-echoserver-imageupdate-test image_name=test-build/argo-imageupdater-test image_tag=0.9 registry=gcr.io
time="2022-09-13T15:40:02Z" level=error msg="Could not get tags from registry: denied: Failed to read tags for host 'gcr.io', repository '/v2/test-build/argo-imageupdater-test/tags/list'" alias= application=ms-echoserver-imageupdate-test image_name=test-build/argo-imageupdater-test image_tag=0.9 registry=gcr.io
time="2022-09-13T15:40:02Z" level=info msg="Processing results: applications=1 images_considered=1 images_skipped=0 images_updated=0 errors=1"
My argocd-image-updater config file:
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-image-updater-config
  labels:
    app.kubernetes.io/name: argocd-image-updater-config
    app.kubernetes.io/part-of: argocd-image-updater
data:
  log.level: debug
  registries.conf: |
    registries:
    - name: Google Container Registry
      api_url: https://gcr.io
      ping: no
      prefix: gcr.io
      credentials: pullsecret:argocd/gcr-imageupdater
      #credentials: secret:argocd/sundayy#creds
Note: the secret has owner permissions.
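For context, a GCR pull secret like the one referenced by pullsecret:argocd/gcr-imageupdater is typically created roughly as follows (this is an assumption about how the secret was made, using the standard _json_key login for GCR; the key file name is a placeholder):
kubectl create secret docker-registry gcr-imageupdater -n argocd \
  --docker-server=gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat gcr-service-account.json)"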
I am in the very early stages of exploring Argo with the Spark operator to run Spark samples on the Minikube setup on my EC2 instance.
The resource details are below; I'm not sure why I am not able to see the Spark app logs.
WORKFLOW.YAML
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: spark-argo-groupby
spec:
  entrypoint: sparkling-operator
  templates:
  - name: spark-groupby
    resource:
      action: create
      manifest: |
        apiVersion: "sparkoperator.k8s.io/v1beta2"
        kind: SparkApplication
        metadata:
          generateName: spark-argo-groupby
        spec:
          type: Scala
          mode: cluster
          image: gcr.io/spark-operator/spark:v3.0.3
          imagePullPolicy: Always
          mainClass: org.apache.spark.examples.GroupByTest
          mainApplicationFile: local:///opt/spark/spark-examples_2.12-3.1.1-hadoop-2.7.jar
          sparkVersion: "3.0.3"
          driver:
            cores: 1
            coreLimit: "1200m"
            memory: "512m"
            labels:
              version: 3.0.0
          executor:
            cores: 1
            instances: 1
            memory: "512m"
            labels:
              version: 3.0.0
  - name: sparkling-operator
    dag:
      tasks:
      - name: SparkGroupBY
        template: spark-groupby
ROLES
# Role for spark-on-k8s-operator to create resources on cluster
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: spark-cluster-cr
  labels:
    rbac.authorization.kubeflow.org/aggregate-to-kubeflow-edit: "true"
rules:
- apiGroups:
  - sparkoperator.k8s.io
  resources:
  - sparkapplications
  verbs:
  - '*'
---
# Allow airflow-worker service account access for spark-on-k8s
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argo-spark-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: spark-cluster-cr
subjects:
- kind: ServiceAccount
  name: default
  namespace: argo
ARGO UI
To dig deeper, I tried all the steps listed at https://dev.to/crenshaw_dev/how-to-debug-an-argo-workflow-31ng, yet I could not get the app logs.
Basically, when I run these examples I expect the Spark app logs to be printed - in this case, the output of the following Scala example:
https://github.com/apache/spark/blob/master/examples/src/main/scala/org/apache/spark/examples/GroupByTest.scala
Interestingly, when I list pods I expected to see driver and executor pods, but I always see only one pod, and it has the logs above, as in the image attached. Please help me understand why the logs are not generated and how I can get them.
RAW LOGS
$ kubectl logs spark-pi-dag-739246604 -n argo
time="2021-12-10T13:28:09.560Z" level=info msg="Starting Workflow Executor" version="{v3.0.3 2021-05-11T21:14:20Z 02071057c082cf295ab8da68f1b2027ff8762b5a v3.0.3 clean go1.15.7 gc linux/amd64}"
time="2021-12-10T13:28:09.581Z" level=info msg="Creating a docker executor"
time="2021-12-10T13:28:09.581Z" level=info msg="Executor (version: v3.0.3, build_date: 2021-05-11T21:14:20Z) initialized (pod: argo/spark-pi-dag-739246604) with template:\n{\"name\":\"sparkpi\",\"inputs\":{},\"outputs\":{},\"metadata\":{},\"resource\":{\"action\":\"create\",\"manifest\":\"apiVersion: \\\"sparkoperator.k8s.io/v1beta2\\\"\\nkind: SparkApplication\\nmetadata:\\n generateName: spark-pi-dag\\nspec:\\n type: Scala\\n mode: cluster\\n image: gjeevanm/spark:v3.1.1\\n imagePullPolicy: Always\\n mainClass: org.apache.spark.examples.SparkPi\\n mainApplicationFile: local:///opt/spark/spark-examples_2.12-3.1.1-hadoop-2.7.jar\\n sparkVersion: 3.1.1\\n driver:\\n cores: 1\\n coreLimit: \\\"1200m\\\"\\n memory: \\\"512m\\\"\\n labels:\\n version: 3.0.0\\n executor:\\n cores: 1\\n instances: 1\\n memory: \\\"512m\\\"\\n labels:\\n version: 3.0.0\\n\"},\"archiveLocation\":{\"archiveLogs\":true,\"s3\":{\"endpoint\":\"minio:9000\",\"bucket\":\"my-bucket\",\"insecure\":true,\"accessKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"accesskey\"},\"secretKeySecret\":{\"name\":\"my-minio-cred\",\"key\":\"secretkey\"},\"key\":\"spark-pi-dag/spark-pi-dag-739246604\"}}}"
time="2021-12-10T13:28:09.581Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
time="2021-12-10T13:28:09.581Z" level=info msg="kubectl create -f /tmp/manifest.yaml -o json"
time="2021-12-10T13:28:10.348Z" level=info msg=argo/SparkApplication.sparkoperator.k8s.io/spark-pi-daghhl6s
time="2021-12-10T13:28:10.348Z" level=info msg="Starting SIGUSR2 signal monitor"
time="2021-12-10T13:28:10.348Z" level=info msg="No output parameters"
As Michael mentioned in his answer, Argo Workflows does not know how other CRDs (such as the SparkApplication you used) work, and thus it cannot pull the logs from the pods created by that particular CRD.
However, you can add the label workflows.argoproj.io/workflow: {{workflow.name}} to the pods generated by the SparkApplication to let Argo Workflows know about them, and then use argo logs -c <container-name> to pull the logs from those pods.
You can find an example here (it uses a Kubeflow CRD, but in your case you'll want to add the labels to the driver and executor in the SparkApplication of your resource template): https://github.com/argoproj/argo-workflows/blob/master/examples/k8s-resource-log-selector.yaml
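As a rough sketch of what that could look like inside the SparkApplication manifest of your spark-groupby template (only the workflows.argoproj.io/workflow labels are new; the other driver/executor fields stay as they are, and the container name passed to argo logs below is an assumption based on the usual Spark-on-Kubernetes driver container name, so verify it against your driver pod):
          driver:
            # ...existing cores/coreLimit/memory settings...
            labels:
              version: 3.0.0
              workflows.argoproj.io/workflow: "{{workflow.name}}"
          executor:
            # ...existing cores/instances/memory settings...
            labels:
              version: 3.0.0
              workflows.argoproj.io/workflow: "{{workflow.name}}"
Then something like argo logs -n argo spark-argo-groupby -c spark-kubernetes-driver should follow the driver logs.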
Argo Workflows' resource templates (like your spark-groupby template) are simplistic. The Workflow controller is running kubectl create, and that's where its involvement in the SparkApplication ends.
The logs you're seeing from the Argo Workflow pod describe the kubectl create process. Your resource is written to a temporary yaml file and then applied to the cluster.
time="2021-12-10T13:28:09.581Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
time="2021-12-10T13:28:09.581Z" level=info msg="kubectl create -f /tmp/manifest.yaml -o json"
time="2021-12-10T13:28:10.348Z" level=info msg=argo/SparkApplication.sparkoperator.k8s.io/spark-pi-daghhl6s
Old answer:
To view the logs generated by your SparkApplication, you'll need to
follow the Spark docs. I'm not familiar, but I'm guessing the
application gets run in a Pod somewhere. If you can find that pod, you
should be able to view the logs with kubectl logs.
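A rough sketch of that manual route, assuming the default spark-role labels that Spark on Kubernetes applies to its pods:
kubectl get pods -n argo -l spark-role=driver
kubectl logs -n argo <driver-pod-name>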
It would be really cool if Argo Workflows could pull Spark logs into
its UI. But building a generic solution would probably be
prohibitively difficult.
Update:
Check Yuan's answer. There's a way to pull the Spark logs into the Workflows CLI!
I have the following values set in my Velero configuration, which was installed using Helm.
schedules:
  my-schedule:
    schedule: "5 * * * *"
    template:
      includeClusterResources: true
      includedNamespaces:
      - jenkins
      includedResources:
      - 'pvcs'
      storageLocation: backups
      snapshotVolumes: true
      ttl: 24h0m0s
I had a PVC (and an underlying PV that had been dynamically provisioned) which I manually deleted, along with the PV.
I then performed a velero restore (pointing to a backup taken prior to the PV/PVC deletion, of course), as in:
velero restore create --from-backup velero-hourly-backup-20201119140005 --include-resources persistentvolumeclaims -n extra-services
extra-services is the namespace where Velero is deployed, by the way.
Although the logs indicate the restore was successful:
▶ velero restore logs velero-hourly-backup-20201119140005-20201119183805 -n extra-services
time="2020-11-19T16:38:06Z" level=info msg="starting restore" logSource="pkg/controller/restore_controller.go:467" restore=extra-services/velero-hourly-backup-20201119140005-20201119183805
time="2020-11-19T16:38:06Z" level=info msg="Starting restore of backup extra-services/velero-hourly-backup-20201119140005" logSource="pkg/restore/restore.go:363" restore=extra-services/velero-hourly-backup-20201119140005-20201119183805
time="2020-11-19T16:38:06Z" level=info msg="restore completed" logSource="pkg/controller/restore_controller.go:482" restore=extra-services/velero-hourly-backup-20201119140005-20201119183805
I see the following error in the restore description:
Name:         velero-hourly-backup-20201119140005-20201119183805
Namespace:    extra-services
Labels:       <none>
Annotations:  <none>
Phase:        PartiallyFailed (run 'velero restore logs velero-hourly-backup-20201119140005-20201119183805' for more information)
Started:      2020-11-19 18:38:05 +0200 EET
Completed:    2020-11-19 18:38:07 +0200 EET
Errors:
  Velero:     error parsing backup contents: directory "resources" does not exist
  Cluster:    <none>
  Namespaces: <none>
Backup:  velero-hourly-backup-20201119140005
Namespaces:
  Included:  all namespaces found in the backup
  Excluded:  <none>
Resources:
  Included:        persistentvolumeclaims
  Excluded:        nodes, events, events.events.k8s.io, backups.velero.io, restores.velero.io, resticrepositories.velero.io
  Cluster-scoped:  auto
Namespace mappings:  <none>
Label selector:  <none>
Restore PVs:  auto
Any ideas?
Does this have to do with me deleting the PV/PVC? (After all, I was trying to simulate a disaster situation.)
I have both backupsEnabled and snapshotsEnabled set to true.
I deployed Prometheus + Grafana to my Kubernetes cluster using the Google Click-to-Deploy setup. Kubernetes is 1.14.10-gke.36. Unfortunately, Prometheus isn't able to start: it continuously starts and then gets terminated due to "Opening storage failed open /data/XXX/meta.json: no such file or directory".
I haven't changed anything in the setup. Does anyone know how to solve this?
My logs:
E 2020-05-29T14:50:09.823814201Z level=info ts=2020-05-29T14:50:09.81844866Z caller=main.go:220 msg="Starting Prometheus" version="(version=2.2.1, branch=HEAD, revision=bc6058c81272a8d938c05e75607371284236aadc)"
E 2020-05-29T14:50:09.823909260Z level=info ts=2020-05-29T14:50:09.818563567Z caller=main.go:221 build_context="(go=go1.10, user=root#149e5b3f0829, date=20180314-14:15:45)"
E 2020-05-29T14:50:09.823917570Z level=info ts=2020-05-29T14:50:09.818590869Z caller=main.go:222 host_details="(Linux 4.14.138+ #1 SMP Tue Sep 3 02:58:08 PDT 2019 x86_64 prometheus-1-prometheus-1 (none))"
E 2020-05-29T14:50:09.823924055Z level=info ts=2020-05-29T14:50:09.818612642Z caller=main.go:223 fd_limits="(soft=1048576, hard=1048576)"
E 2020-05-29T14:50:09.828319848Z level=info ts=2020-05-29T14:50:09.828105376Z caller=web.go:382 component=web msg="Start listening for connections" address=0.0.0.0:9090
E 2020-05-29T14:50:09.909997968Z level=info ts=2020-05-29T14:50:09.828059101Z caller=main.go:504 msg="Starting TSDB ..."
E 2020-05-29T14:50:09.911528108Z level=info ts=2020-05-29T14:50:09.911319586Z caller=main.go:398 msg="Stopping scrape discovery manager..."
E 2020-05-29T14:50:09.911559337Z level=info ts=2020-05-29T14:50:09.91137533Z caller=main.go:411 msg="Stopping notify discovery manager..."
E 2020-05-29T14:50:09.911600849Z level=info ts=2020-05-29T14:50:09.911396384Z caller=main.go:432 msg="Stopping scrape manager..."
E 2020-05-29T14:50:09.911611998Z level=info ts=2020-05-29T14:50:09.9114095Z caller=main.go:394 msg="Scrape discovery manager stopped"
E 2020-05-29T14:50:09.911617814Z level=info ts=2020-05-29T14:50:09.911434087Z caller=manager.go:460 component="rule manager" msg="Stopping rule manager..."
E 2020-05-29T14:50:09.911624146Z level=info ts=2020-05-29T14:50:09.911468881Z caller=manager.go:466 component="rule manager" msg="Rule manager stopped"
E 2020-05-29T14:50:09.911630066Z level=info ts=2020-05-29T14:50:09.911546355Z caller=notifier.go:512 component=notifier msg="Stopping notification manager..."
E 2020-05-29T14:50:09.911742492Z level=info ts=2020-05-29T14:50:09.911620851Z caller=main.go:407 msg="Notify discovery manager stopped"
E 2020-05-29T14:50:09.911807858Z level=info ts=2020-05-29T14:50:09.911746592Z caller=main.go:573 msg="Notifier manager stopped"
E 2020-05-29T14:50:09.911864605Z level=info ts=2020-05-29T14:50:09.91179338Z caller=main.go:426 msg="Scrape manager stopped"
E 2020-05-29T14:50:09.919909034Z level=error ts=2020-05-29T14:50:09.919646048Z caller=main.go:582 err="Opening storage failed open /data/01D37JS32JWMR54HQXBQCRJW1V/meta.json: no such file or directory"
E 2020-05-29T14:50:09.919945114Z level=info ts=2020-05-29T14:50:09.91972603Z caller=main.go:584 msg="See you next time!"