Cannot connect to haproxy exporter - haproxy

I have configured HAProxy locally on my machine and I am trying to read its metrics using haproxy_exporter.
I am using the Docker image of haproxy_exporter and ran it with the command below.
docker run -p 9101:9101 prom/haproxy-exporter:latest --haproxy.scrape-uri="http://localhost:8181/stats;csv"
When I try to reach the metrics endpoint, I get the error below saying that the connection was refused. What am I doing wrong?
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:584 level=info msg="Starting haproxy_exporter" version="(version=0.13.0, branch=HEAD, revision=c5c72aa059b69c18ab38fd63777653c13eddaa7f)"
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:585 level=info msg="Build context" context="(go=go1.17.3, user=root@77b4a325967c, date=20211126-09:54:41)"
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:603 level=info msg="Listening on address" address=:9101
ts=2022-05-06T10:03:49.464Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false
ts=2022-05-06T10:03:56.366Z caller=haproxy_exporter.go:399 level=error msg="Can't scrape HAProxy" err="Get \"http://localhost:8181/stats;csv\": dial tcp 127.0.0.1:8181: connect: connection refused"
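A likely factor is that, inside the container, localhost refers to the exporter container itself rather than the machine running HAProxy. For comparison, a variation that points the scrape URI at the Docker host instead might look like this (a minimal sketch; host.docker.internal is a Docker Desktop alias, and the --add-host mapping assumes Docker 20.10+ on Linux):
docker run -p 9101:9101 --add-host=host.docker.internal:host-gateway prom/haproxy-exporter:latest --haproxy.scrape-uri="http://host.docker.internal:8181/stats;csv"
On Linux, running the container with --network=host (which makes -p unnecessary) and keeping the original localhost URI is another common workaround.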

Related

Traefik Error use of closed network connection

I have a problem with my TCP and UDP entryPoints:
time="2022-08-03T10:12:16Z" level=error msg="accept tcp [::]:3478: use of closed network connection" entryPointName=tcp3478
time="2022-08-03T10:12:16Z" level=error msg="Error while starting server: accept tcp [::]:3478: use of closed network connection" entryPointName=tcp3478
time="2022-08-03T10:12:16Z" level=error msg="accept tcp [::]:80: use of closed network connection" entryPointName=http
time="2022-08-03T10:12:16Z" level=error msg="Error while starting server: accept tcp [::]:80: use of closed network connection" entryPointName=http
time="2022-08-03T10:12:16Z" level=error msg="accept tcp [::]:443: use of closed network connection" entryPointName=https
time="2022-08-03T10:12:16Z" level=error msg="Error while starting server: accept tcp [::]:443: use of closed network connection" entryPointName=https
time="2022-08-03T10:12:16Z" level=error msg="accept tcp [::]:57772: use of closed network connection" entryPointName=tcp57772
time="2022-08-03T10:12:16Z" level=error msg="Error while starting server: accept tcp [::]:57772: use of closed network connection" entryPointName=tcp57772
my traefik.yaml: (configuration attached as a screenshot, not reproduced here)
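For reference, a static configuration with entryPoints matching the ports in the log would typically look roughly like this (a sketch reconstructed from the entryPoint names above, not the actual file):
entryPoints:
  http:
    address: ":80"
  https:
    address: ":443"
  tcp3478:
    address: ":3478"
  tcp57772:
    address: ":57772"
(UDP entryPoints would use an address such as ":3478/udp".)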
This did not solve it for me: https://community.traefik.io/t/2-3-errors-on-container-start-is-this-someting-to-worry-about/8438/8
I assumed this was a startup error (because it says "Error while starting server"), but then I noticed it only occurs when I stop the Docker container. So I currently ignore it, because everything works fine. Unfortunately, I was not able to create a minimal reproducible example; otherwise I would have filed a bug report.

How to communicate with a gitlab service container

I have the following .gitlab-ci.yml file:
stages:
  - scan

scanning:
  stage: scan
  image: docker:19.03.6
  services:
    - name: arminc/clair-db:latest
    - name: docker:dind
  before_script:
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
  script:
    - export LOCAL_MACHINE_IP_ADDRESS=arminc-clair-db
    - ping -c 4 $LOCAL_MACHINE_IP_ADDRESS:5432 # Pinging 'arminc-clair-db:5432' to prove that it IS accessible
    - docker run --interactive --rm --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@${LOCAL_MACHINE_IP_ADDRESS}:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=vismarkjuarez1994/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar
Everything runs just fine until the last script command, because the host arminc-clair-db:5432 (which is a service) cannot be resolved. How do I get my docker container to "see" and communicate with the arminc/clair-db container?
Below are all the output logs, with the error at the bottom:
Running with gitlab-runner 13.1.0 (6214287e)
on docker-auto-scale fa6cab46
Preparing the "docker+machine" executor
01:18
Using Docker executor with image docker:19.03.6 ...
Starting service arminc/clair-db:latest ...
Pulling docker image arminc/clair-db:latest ...
Using docker image sha256:032e46f9e42c3f26280ed984de737e5d3d1ca99bb641414b13226c6c62556feb for arminc/clair-db:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image sha256:d5d139be840a6ffa04348fc87740e8c095cade6e9cb977785fdba51e5fd7ffec for docker:dind ...
Waiting for services to be up and running...
*** WARNING: Service runner-fa6cab46-project-19334692-concurrent-0-f3fcf99fb2cfbb7e-docker-1 probably didn't start properly.
Health check error:
service "runner-fa6cab46-project-19334692-concurrent-0-f3fcf99fb2cfbb7e-docker-1-wait-for-service" timeout
Health check container logs:
Service container logs:
2020-07-13T17:20:21.309966089Z time="2020-07-13T17:20:21.294373440Z" level=info msg="Starting up"
2020-07-13T17:20:21.310002707Z time="2020-07-13T17:20:21.300572503Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2020-07-13T17:20:21.310007147Z time="2020-07-13T17:20:21.302510800Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
2020-07-13T17:20:21.310010666Z time="2020-07-13T17:20:21.307985849Z" level=info msg="libcontainerd: started new containerd process" pid=18
2020-07-13T17:20:21.320081834Z time="2020-07-13T17:20:21.312042380Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.320098887Z time="2020-07-13T17:20:21.312066080Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.320103371Z time="2020-07-13T17:20:21.312091196Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.320107440Z time="2020-07-13T17:20:21.312100727Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.409230132Z time="2020-07-13T17:20:21.365046801Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
2020-07-13T17:20:21.409258920Z time="2020-07-13T17:20:21.380378683Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
2020-07-13T17:20:21.409263750Z time="2020-07-13T17:20:21.380477131Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409268507Z time="2020-07-13T17:20:21.380709522Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-07-13T17:20:21.409274034Z time="2020-07-13T17:20:21.380721072Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409277876Z time="2020-07-13T17:20:21.401607600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-07-13T17:20:21.409282127Z time="2020-07-13T17:20:21.401634659Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409286166Z time="2020-07-13T17:20:21.401762230Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409290021Z time="2020-07-13T17:20:21.402131753Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409293511Z time="2020-07-13T17:20:21.402413697Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409296992Z time="2020-07-13T17:20:21.402424580Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
2020-07-13T17:20:21.409309649Z time="2020-07-13T17:20:21.402470351Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-07-13T17:20:21.409313855Z time="2020-07-13T17:20:21.402491305Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-07-13T17:20:21.409317643Z time="2020-07-13T17:20:21.402498473Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
2020-07-13T17:20:21.422337155Z time="2020-07-13T17:20:21.415144189Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
2020-07-13T17:20:21.422366287Z time="2020-07-13T17:20:21.415177490Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
2020-07-13T17:20:21.422370656Z time="2020-07-13T17:20:21.415220797Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422374289Z time="2020-07-13T17:20:21.415238800Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422377575Z time="2020-07-13T17:20:21.415249745Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422380973Z time="2020-07-13T17:20:21.415261483Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422384239Z time="2020-07-13T17:20:21.415285048Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422387513Z time="2020-07-13T17:20:21.415296352Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422390735Z time="2020-07-13T17:20:21.415306615Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422393948Z time="2020-07-13T17:20:21.415317274Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
2020-07-13T17:20:21.422397113Z time="2020-07-13T17:20:21.415539917Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
2020-07-13T17:20:21.422400411Z time="2020-07-13T17:20:21.415661653Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
2020-07-13T17:20:21.422403653Z time="2020-07-13T17:20:21.416302031Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422406821Z time="2020-07-13T17:20:21.416331092Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
2020-07-13T17:20:21.422421577Z time="2020-07-13T17:20:21.416366348Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422425309Z time="2020-07-13T17:20:21.416378191Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422428496Z time="2020-07-13T17:20:21.416388907Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422431715Z time="2020-07-13T17:20:21.416399419Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422434873Z time="2020-07-13T17:20:21.416410316Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422438168Z time="2020-07-13T17:20:21.416421136Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422441391Z time="2020-07-13T17:20:21.416431148Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422444483Z time="2020-07-13T17:20:21.416441183Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422447730Z time="2020-07-13T17:20:21.416450302Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
2020-07-13T17:20:21.422450979Z time="2020-07-13T17:20:21.416686499Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422454181Z time="2020-07-13T17:20:21.416699740Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422457333Z time="2020-07-13T17:20:21.416710157Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422460460Z time="2020-07-13T17:20:21.416720291Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422463733Z time="2020-07-13T17:20:21.417000522Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
2020-07-13T17:20:21.422466909Z time="2020-07-13T17:20:21.417067077Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
2020-07-13T17:20:21.422470029Z time="2020-07-13T17:20:21.417076538Z" level=info msg="containerd successfully booted in 0.053678s"
2020-07-13T17:20:21.522992752Z time="2020-07-13T17:20:21.445278000Z" level=info msg="Setting the storage driver from the $DOCKER_DRIVER environment variable (overlay2)"
2020-07-13T17:20:21.523023868Z time="2020-07-13T17:20:21.445485273Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.523028739Z time="2020-07-13T17:20:21.445497879Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.523032708Z time="2020-07-13T17:20:21.445515130Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.523047388Z time="2020-07-13T17:20:21.445523771Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.523051107Z time="2020-07-13T17:20:21.446951417Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.523054385Z time="2020-07-13T17:20:21.446967002Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.523058180Z time="2020-07-13T17:20:21.446982738Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.523061840Z time="2020-07-13T17:20:21.446991531Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.523065052Z time="2020-07-13T17:20:21.499800243Z" level=info msg="Loading containers: start."
2020-07-13T17:20:21.529844429Z time="2020-07-13T17:20:21.527114868Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nbridge 167936 1 br_netfilter\nstp 16384 1 bridge\nllc 16384 2 bridge,stp\nip: can't find device 'br_netfilter'\nbr_netfilter 24576 0 \nbridge 167936 1 br_netfilter\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
2020-07-13T17:20:21.680323559Z time="2020-07-13T17:20:21.642534057Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2020-07-13T17:20:21.715621265Z time="2020-07-13T17:20:21.712210528Z" level=info msg="Loading containers: done."
2020-07-13T17:20:21.741819560Z time="2020-07-13T17:20:21.740973136Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
2020-07-13T17:20:21.743421190Z time="2020-07-13T17:20:21.741147945Z" level=info msg="Daemon has completed initialization"
2020-07-13T17:20:21.858939063Z time="2020-07-13T17:20:21.849737479Z" level=info msg="API listen on [::]:2375"
2020-07-13T17:20:21.859149575Z time="2020-07-13T17:20:21.859084136Z" level=info msg="API listen on /var/run/docker.sock"
*********
Pulling docker image docker:19.03.6 ...
Using docker image sha256:6512892b576811235f68a6dcd5fbe10b387ac0ba3709aeaf80cd5cfcecb387c7 for docker:19.03.6 ...
Preparing environment
00:03
Running on runner-fa6cab46-project-19334692-concurrent-0 via runner-fa6cab46-srm-1594660737-ea79a1df...
Getting source from Git repository
00:04
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/vismarkjuarez/car-assembly-line/.git/
Created fresh repository.
Checking out 71a14f15 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:27
$ docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ export LOCAL_MACHINE_IP_ADDRESS=arminc-clair-db
$ ping -c 4 $LOCAL_MACHINE_IP_ADDRESS:5432
PING arminc-clair-db:5432 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.079 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.081 ms
--- arminc-clair-db:5432 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.079/0.085/0.096 ms
$ docker run --interactive --rm --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@${LOCAL_MACHINE_IP_ADDRESS}:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=[MASKED]/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar
Unable to find image 'registry.gitlab.com/gitlab-org/security-products/analyzers/klar:latest' locally
latest: Pulling from gitlab-org/security-products/analyzers/klar
c9b1b535fdd9: Pulling fs layer
9d4a5a860853: Pulling fs layer
f02644185e38: Pulling fs layer
01f6d8f93d4f: Pulling fs layer
6756a56563fe: Pulling fs layer
01f6d8f93d4f: Waiting
6756a56563fe: Waiting
9d4a5a860853: Verifying Checksum
9d4a5a860853: Download complete
01f6d8f93d4f: Verifying Checksum
01f6d8f93d4f: Download complete
c9b1b535fdd9: Verifying Checksum
c9b1b535fdd9: Download complete
f02644185e38: Verifying Checksum
f02644185e38: Download complete
6756a56563fe: Verifying Checksum
6756a56563fe: Download complete
c9b1b535fdd9: Pull complete
9d4a5a860853: Pull complete
f02644185e38: Pull complete
01f6d8f93d4f: Pull complete
6756a56563fe: Pull complete
Digest: sha256:229558a024e5a1c6ca5b1ed67bc13f6eeca15d4cd63c278ef6c3efa357630bde
Status: Downloaded newer image for registry.gitlab.com/gitlab-org/security-products/analyzers/klar:latest
[INFO] [klar] [2020-07-13T17:21:13Z] ▶ GitLab klar analyzer v2.4.8
[WARN] [klar] [2020-07-13T17:21:13Z] ▶ Allowlist file with path '/tmp/app/clair-whitelist.yml' does not exist, skipping
[WARN] [klar] [2020-07-13T17:21:13Z] ▶ Allowlist file with path '/tmp/app/vulnerability-allowlist.yml' does not exist, skipping
[INFO] [klar] [2020-07-13T17:21:13Z] ▶ DOCKER_USER and DOCKER_PASSWORD environment variables have not been configured. Defaulting to DOCKER_USER=$CI_REGISTRY_USER and DOCKER_PASSWORD=$CI_REGISTRY_PASSWORD
[WARN] [klar] [2020-07-13T17:21:14Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 1 of 10
[WARN] [klar] [2020-07-13T17:21:16Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 2 of 10
[WARN] [klar] [2020-07-13T17:21:18Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 3 of 10
[WARN] [klar] [2020-07-13T17:21:20Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 4 of 10
[WARN] [klar] [2020-07-13T17:21:22Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 5 of 10
[WARN] [klar] [2020-07-13T17:21:24Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 6 of 10
[WARN] [klar] [2020-07-13T17:21:26Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 7 of 10
[WARN] [klar] [2020-07-13T17:21:28Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 8 of 10
[WARN] [klar] [2020-07-13T17:21:30Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 9 of 10
[WARN] [klar] [2020-07-13T17:21:32Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 10 of 10
[FATA] [klar] [2020-07-13T17:21:34Z] ▶ error while waiting for vulnerabilities database to start. Giving up after 10 retries.: dial tcp: lookup arminc-clair-db on 169.254.169.254:53: no such host
ERROR: Job failed: exit code 1
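One workaround that is sometimes suggested here: containers started through docker:dind do not inherit the runner's service DNS (which is why the lookup of arminc-clair-db fails inside the klar container even though the ping from the job script succeeds), so resolve the service name to an IP address in the job script and pass that IP into the spawned container. A sketch, assuming the dind network can route to the runner's bridge and that getent is available (it comes from musl-utils on Alpine-based images; nslookup works as an alternative):
  script:
    # resolve the GitLab service alias to an IP before handing it to the dind-spawned container
    - export LOCAL_MACHINE_IP_ADDRESS=$(getent hosts arminc-clair-db | awk '{ print $1 }')
    # pass the IP (not the name) in the connection string, since klar runs outside the runner's service network
    - docker run --interactive --rm --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@${LOCAL_MACHINE_IP_ADDRESS}:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=vismarkjuarez1994/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar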

Weavescope on Microk8s doesn't recognize containers

I'm running a MicroK8s single-node cluster and just installed Weavescope; however, it doesn't recognize any containers running. I can see my pods and services fine, but each pod simply states "0 containers" underneath.
The logs from the Weavescope agent and app pods indicate that something is very wrong, but I'm not adept enough with Kubernetes to know how to deal with the errors.
Logs from Weavescope agent:
microk8s.kubectl logs -n weave weave-scope-cluster-agent-7944c858c9-bszjw
time="2020-05-23T14:56:10Z" level=info msg="publishing to: weave-scope-app.weave.svc.cluster.local:80"
<probe> INFO: 2020/05/23 14:56:10.378586 Basic authentication disabled
<probe> INFO: 2020/05/23 14:56:10.439179 command line args: --mode=probe --probe-only=true --probe.http.listen=:4041 --probe.kubernetes.role=cluster --probe.publish.interval=4.5s --probe.spy.interval=2s weave-scope-app.weave.svc.cluster.local:80
<probe> INFO: 2020/05/23 14:56:10.439215 probe starting, version 1.13.1, ID 6336ff46bcd86913
<probe> ERRO: 2020/05/23 14:56:10.439261 Error getting docker bridge ip: route ip+net: no such network interface
<probe> INFO: 2020/05/23 14:56:10.439487 kubernetes: targeting api server https://10.152.183.1:443
<probe> ERRO: 2020/05/23 14:56:10.440206 plugins: problem loading: no such file or directory
<probe> INFO: 2020/05/23 14:56:10.444345 Profiling data being exported to :4041
<probe> INFO: 2020/05/23 14:56:10.444355 go tool pprof http://:4041/debug/pprof/{profile,heap,block}
<probe> WARN: 2020/05/23 14:56:10.444505 Error collecting weave status, backing off 10s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> INFO: 2020/05/23 14:56:10.506596 volumesnapshotdatas are not supported by this Kubernetes version
<probe> INFO: 2020/05/23 14:56:10.506950 volumesnapshots are not supported by this Kubernetes version
<probe> INFO: 2020/05/23 14:56:11.559811 Control connection to weave-scope-app.weave.svc.cluster.local starting
<probe> INFO: 2020/05/23 14:56:14.948382 Publish loop for weave-scope-app.weave.svc.cluster.local starting
<probe> WARN: 2020/05/23 14:56:20.447578 Error collecting weave status, backing off 20s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> WARN: 2020/05/23 14:56:40.451421 Error collecting weave status, backing off 40s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> INFO: 2020/05/23 15:19:12.825869 Pipe pipe-7287306037502507515 connection to weave-scope-app.weave.svc.cluster.local starting
<probe> INFO: 2020/05/23 15:19:16.509232 Pipe pipe-7287306037502507515 connection to weave-scope-app.weave.svc.cluster.local exiting
Logs from the Weavescope app:
microk8s.kubectl logs -n weave weave-scope-app-bc7444d59-csxjd
<app> INFO: 2020/05/23 14:56:11.221084 app starting, version 1.13.1, ID 5e3953d1209f7147
<app> INFO: 2020/05/23 14:56:11.221114 command line args: --mode=app
<app> INFO: 2020/05/23 14:56:11.275231 Basic authentication disabled
<app> INFO: 2020/05/23 14:56:11.290717 listening on :4040
<app> WARN: 2020/05/23 14:56:11.340182 Error updating weaveDNS, backing off 20s: Error running weave ps: exit status 1: "Link not found\n". If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<app> WARN: 2020/05/23 14:56:31.457702 Error updating weaveDNS, backing off 40s: Error running weave ps: exit status 1: "Link not found\n". If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<app> ERRO: 2020/05/23 15:19:16.504169 Error copying to pipe pipe-7287306037502507515 (1) websocket: io: read/write on closed pipe
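The "Error getting docker bridge ip" line suggests the probe is looking for a Docker daemon, whereas MicroK8s typically ships with containerd as its container runtime. A quick way to confirm which runtime the node is actually using (a sketch):
microk8s.kubectl get nodes -o wide
# the CONTAINER-RUNTIME column shows what the probe would have to talk to, e.g. containerd://...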

prometheus 2.0.0 error msg="Opening storage failed" err="open DB in /home/prometheus: Lockfile created, but doesn't exist"

Context: trying to use Prometheus 2.0.0 on k8s 1.10.2, with Azure File storage as the persistent storage medium.
Problem: using Azure File storage with Prometheus gives me the following error:
level=info ts=2018-06-29T11:08:50.603235475Z caller=main.go:215 msg="Starting Prometheus" version="(version=2.0.0, branch=HEAD, revision=0a74f98628a0463dddc90528220c94de5032d1a0)"
level=info ts=2018-06-29T11:08:50.603302775Z caller=main.go:216 build_context="(go=go1.9.2, user=root@615b82cb36b6, date=20171108-07:11:59)"
level=info ts=2018-06-29T11:08:50.603341576Z caller=main.go:217 host_details="(Linux 4.15.0-1013-azure #13~16.04.2-Ubuntu SMP Wed May 30 01:39:27 UTC 2018 x86_64 prometheus-84f89cd668-r8p5r (none))"
level=info ts=2018-06-29T11:08:50.605677083Z caller=web.go:380 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-06-29T11:08:50.605759983Z caller=main.go:314 msg="Starting TSDB"
level=info ts=2018-06-29T11:08:50.605816483Z caller=targetmanager.go:71 component="target manager" msg="Starting target manager..."
level=error ts=2018-06-29T11:08:50.778059828Z caller=main.go:323 msg="Opening storage failed" err="open DB in /home/prometheus: Lockfile created, but doesn't exist"
Note: I do not want to use the --storage.tsdb.no-lockfile flag on the Prometheus deployment.
Is there any other way I can fix this issue?
Thanks for any input.
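For context, the persistent storage described above would typically be claimed roughly like this (a sketch; the claim name, storage class name, and size are assumptions):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: prometheus-data
spec:
  accessModes:
    - ReadWriteMany   # Azure File volumes support ReadWriteMany
  storageClassName: azurefile
  resources:
    requests:
      storage: 10Gi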

Mysql monitoring using Prometheus and Grafana

I need to monitor MySQL using Prometheus and Grafana.
I added mysqld_exporter on the client server.
While starting mysqld_exporter on the client server, it reports:
time="2017-09-05T11:42:53Z" level=info msg="Error scraping slave state: dial tcp [::1]:3306: getsockopt: connection refused" file="mysqld_exporter.go" line=824
time="2017-09-05T11:42:53Z" level=info msg="Error scraping table schema: dial tcp [::1]:3306: getsockopt: connection refused" file="mysqld_exporter.go" line=836
In client server :
I have downloaded the mysqld exporter
Extracted it
Created .my.cnf file
[client]
user=prom
password=abc123
Granted permissions to that user by running these commands:
mysql> GRANT REPLICATION CLIENT, PROCESS ON *.* TO 'prom'@'localhost' IDENTIFIED BY 'abc123';
mysql> GRANT SELECT ON performance_schema.* TO 'prom'@'localhost';
Executed this command
./mysqld_exporter -config.my-cnf=".my.cnf"
The Prometheus status page shows this target as UP.
In Grafana, however, no data is loaded for MySQL.
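For reference, the Prometheus side of this setup would typically use a scrape config along these lines (a sketch; the job name and target address are assumptions, and 9104 is mysqld_exporter's default port):
scrape_configs:
  - job_name: 'mysql'
    static_configs:
      - targets: ['<client-server-host>:9104']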