How to communicate with a GitLab service container - PostgreSQL

I have the following .gitlab-ci.yml file:
stages:
  - scan

scanning:
  stage: scan
  image: docker:19.03.6
  services:
    - name: arminc/clair-db:latest
    - name: docker:dind
  before_script:
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
  script:
    - export LOCAL_MACHINE_IP_ADDRESS=arminc-clair-db
    - ping -c 4 $LOCAL_MACHINE_IP_ADDRESS:5432 # Pinging 'arminc-clair-db:5432' to prove that it IS accessible
    - docker run --interactive --rm --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@${LOCAL_MACHINE_IP_ADDRESS}:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=vismarkjuarez1994/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar
Everything runs just fine until the last script command, which fails because the host arminc-clair-db (which is a service) cannot be resolved from inside the container started by docker run. How do I get my Docker container to "see" and communicate with the arminc/clair-db service container?
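One direction I have considered but not yet verified: resolve the service alias inside the job container (where name resolution clearly works, per the ping below) and pass the resulting IP to the inner container explicitly, since containers started through the docker:dind daemon do not share the job container's /etc/hosts or DNS. A rough sketch of what the script section might become (getent is an assumption; any other way of resolving the name would do):

# Resolve the service alias using the job container's resolver, then hand the
# IP to the container started against the dind daemon via --add-host.
export CLAIR_DB_IP=$(getent hosts arminc-clair-db | awk '{ print $1 }')
docker run --interactive --rm --volume "$PWD":/tmp/app \
  --add-host=arminc-clair-db:"$CLAIR_DB_IP" \
  -e CI_PROJECT_DIR=/tmp/app \
  -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@arminc-clair-db:5432/postgres?sslmode=disable&statement_timeout=60000" \
  -e CI_APPLICATION_REPOSITORY=vismarkjuarez1994/codigo-initiative \
  -e CI_APPLICATION_TAG=latest \
  registry.gitlab.com/gitlab-org/security-products/analyzers/klar

I have not confirmed whether the service's IP is actually routable from containers created by the dind daemon, so this sketch may only get past the name-resolution step.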
Below are all the output logs, with the error at the bottom:
Running with gitlab-runner 13.1.0 (6214287e)
on docker-auto-scale fa6cab46
Preparing the "docker+machine" executor
01:18
Using Docker executor with image docker:19.03.6 ...
Starting service arminc/clair-db:latest ...
Pulling docker image arminc/clair-db:latest ...
Using docker image sha256:032e46f9e42c3f26280ed984de737e5d3d1ca99bb641414b13226c6c62556feb for arminc/clair-db:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image sha256:d5d139be840a6ffa04348fc87740e8c095cade6e9cb977785fdba51e5fd7ffec for docker:dind ...
Waiting for services to be up and running...
*** WARNING: Service runner-fa6cab46-project-19334692-concurrent-0-f3fcf99fb2cfbb7e-docker-1 probably didn't start properly.
Health check error:
service "runner-fa6cab46-project-19334692-concurrent-0-f3fcf99fb2cfbb7e-docker-1-wait-for-service" timeout
Health check container logs:
Service container logs:
2020-07-13T17:20:21.309966089Z time="2020-07-13T17:20:21.294373440Z" level=info msg="Starting up"
2020-07-13T17:20:21.310002707Z time="2020-07-13T17:20:21.300572503Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2020-07-13T17:20:21.310007147Z time="2020-07-13T17:20:21.302510800Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
2020-07-13T17:20:21.310010666Z time="2020-07-13T17:20:21.307985849Z" level=info msg="libcontainerd: started new containerd process" pid=18
2020-07-13T17:20:21.320081834Z time="2020-07-13T17:20:21.312042380Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.320098887Z time="2020-07-13T17:20:21.312066080Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.320103371Z time="2020-07-13T17:20:21.312091196Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.320107440Z time="2020-07-13T17:20:21.312100727Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.409230132Z time="2020-07-13T17:20:21.365046801Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
2020-07-13T17:20:21.409258920Z time="2020-07-13T17:20:21.380378683Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
2020-07-13T17:20:21.409263750Z time="2020-07-13T17:20:21.380477131Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409268507Z time="2020-07-13T17:20:21.380709522Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-07-13T17:20:21.409274034Z time="2020-07-13T17:20:21.380721072Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409277876Z time="2020-07-13T17:20:21.401607600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-07-13T17:20:21.409282127Z time="2020-07-13T17:20:21.401634659Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409286166Z time="2020-07-13T17:20:21.401762230Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409290021Z time="2020-07-13T17:20:21.402131753Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409293511Z time="2020-07-13T17:20:21.402413697Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409296992Z time="2020-07-13T17:20:21.402424580Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
2020-07-13T17:20:21.409309649Z time="2020-07-13T17:20:21.402470351Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-07-13T17:20:21.409313855Z time="2020-07-13T17:20:21.402491305Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-07-13T17:20:21.409317643Z time="2020-07-13T17:20:21.402498473Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
2020-07-13T17:20:21.422337155Z time="2020-07-13T17:20:21.415144189Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
2020-07-13T17:20:21.422366287Z time="2020-07-13T17:20:21.415177490Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
2020-07-13T17:20:21.422370656Z time="2020-07-13T17:20:21.415220797Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422374289Z time="2020-07-13T17:20:21.415238800Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422377575Z time="2020-07-13T17:20:21.415249745Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422380973Z time="2020-07-13T17:20:21.415261483Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422384239Z time="2020-07-13T17:20:21.415285048Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422387513Z time="2020-07-13T17:20:21.415296352Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422390735Z time="2020-07-13T17:20:21.415306615Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422393948Z time="2020-07-13T17:20:21.415317274Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
2020-07-13T17:20:21.422397113Z time="2020-07-13T17:20:21.415539917Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
2020-07-13T17:20:21.422400411Z time="2020-07-13T17:20:21.415661653Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
2020-07-13T17:20:21.422403653Z time="2020-07-13T17:20:21.416302031Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422406821Z time="2020-07-13T17:20:21.416331092Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
2020-07-13T17:20:21.422421577Z time="2020-07-13T17:20:21.416366348Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422425309Z time="2020-07-13T17:20:21.416378191Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422428496Z time="2020-07-13T17:20:21.416388907Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422431715Z time="2020-07-13T17:20:21.416399419Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422434873Z time="2020-07-13T17:20:21.416410316Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422438168Z time="2020-07-13T17:20:21.416421136Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422441391Z time="2020-07-13T17:20:21.416431148Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422444483Z time="2020-07-13T17:20:21.416441183Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422447730Z time="2020-07-13T17:20:21.416450302Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
2020-07-13T17:20:21.422450979Z time="2020-07-13T17:20:21.416686499Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422454181Z time="2020-07-13T17:20:21.416699740Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422457333Z time="2020-07-13T17:20:21.416710157Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422460460Z time="2020-07-13T17:20:21.416720291Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422463733Z time="2020-07-13T17:20:21.417000522Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
2020-07-13T17:20:21.422466909Z time="2020-07-13T17:20:21.417067077Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
2020-07-13T17:20:21.422470029Z time="2020-07-13T17:20:21.417076538Z" level=info msg="containerd successfully booted in 0.053678s"
2020-07-13T17:20:21.522992752Z time="2020-07-13T17:20:21.445278000Z" level=info msg="Setting the storage driver from the $DOCKER_DRIVER environment variable (overlay2)"
2020-07-13T17:20:21.523023868Z time="2020-07-13T17:20:21.445485273Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.523028739Z time="2020-07-13T17:20:21.445497879Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.523032708Z time="2020-07-13T17:20:21.445515130Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.523047388Z time="2020-07-13T17:20:21.445523771Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.523051107Z time="2020-07-13T17:20:21.446951417Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.523054385Z time="2020-07-13T17:20:21.446967002Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.523058180Z time="2020-07-13T17:20:21.446982738Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.523061840Z time="2020-07-13T17:20:21.446991531Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.523065052Z time="2020-07-13T17:20:21.499800243Z" level=info msg="Loading containers: start."
2020-07-13T17:20:21.529844429Z time="2020-07-13T17:20:21.527114868Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nbridge 167936 1 br_netfilter\nstp 16384 1 bridge\nllc 16384 2 bridge,stp\nip: can't find device 'br_netfilter'\nbr_netfilter 24576 0 \nbridge 167936 1 br_netfilter\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
2020-07-13T17:20:21.680323559Z time="2020-07-13T17:20:21.642534057Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2020-07-13T17:20:21.715621265Z time="2020-07-13T17:20:21.712210528Z" level=info msg="Loading containers: done."
2020-07-13T17:20:21.741819560Z time="2020-07-13T17:20:21.740973136Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
2020-07-13T17:20:21.743421190Z time="2020-07-13T17:20:21.741147945Z" level=info msg="Daemon has completed initialization"
2020-07-13T17:20:21.858939063Z time="2020-07-13T17:20:21.849737479Z" level=info msg="API listen on [::]:2375"
2020-07-13T17:20:21.859149575Z time="2020-07-13T17:20:21.859084136Z" level=info msg="API listen on /var/run/docker.sock"
*********
Pulling docker image docker:19.03.6 ...
Using docker image sha256:6512892b576811235f68a6dcd5fbe10b387ac0ba3709aeaf80cd5cfcecb387c7 for docker:19.03.6 ...
Preparing environment
00:03
Running on runner-fa6cab46-project-19334692-concurrent-0 via runner-fa6cab46-srm-1594660737-ea79a1df...
Getting source from Git repository
00:04
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/vismarkjuarez/car-assembly-line/.git/
Created fresh repository.
Checking out 71a14f15 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:27
$ docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ export LOCAL_MACHINE_IP_ADDRESS=arminc-clair-db
$ ping -c 4 $LOCAL_MACHINE_IP_ADDRESS:5432
PING arminc-clair-db:5432 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.079 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.081 ms
--- arminc-clair-db:5432 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.079/0.085/0.096 ms
$ docker run --interactive --rm --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@${LOCAL_MACHINE_IP_ADDRESS}:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=[MASKED]/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar
Unable to find image 'registry.gitlab.com/gitlab-org/security-products/analyzers/klar:latest' locally
latest: Pulling from gitlab-org/security-products/analyzers/klar
c9b1b535fdd9: Pulling fs layer
9d4a5a860853: Pulling fs layer
f02644185e38: Pulling fs layer
01f6d8f93d4f: Pulling fs layer
6756a56563fe: Pulling fs layer
01f6d8f93d4f: Waiting
6756a56563fe: Waiting
9d4a5a860853: Verifying Checksum
9d4a5a860853: Download complete
01f6d8f93d4f: Verifying Checksum
01f6d8f93d4f: Download complete
c9b1b535fdd9: Verifying Checksum
c9b1b535fdd9: Download complete
f02644185e38: Verifying Checksum
f02644185e38: Download complete
6756a56563fe: Verifying Checksum
6756a56563fe: Download complete
c9b1b535fdd9: Pull complete
9d4a5a860853: Pull complete
f02644185e38: Pull complete
01f6d8f93d4f: Pull complete
6756a56563fe: Pull complete
Digest: sha256:229558a024e5a1c6ca5b1ed67bc13f6eeca15d4cd63c278ef6c3efa357630bde
Status: Downloaded newer image for registry.gitlab.com/gitlab-org/security-products/analyzers/klar:latest
[INFO] [klar] [2020-07-13T17:21:13Z] ▶ GitLab klar analyzer v2.4.8
[WARN] [klar] [2020-07-13T17:21:13Z] ▶ Allowlist file with path '/tmp/app/clair-whitelist.yml' does not exist, skipping
[WARN] [klar] [2020-07-13T17:21:13Z] ▶ Allowlist file with path '/tmp/app/vulnerability-allowlist.yml' does not exist, skipping
[INFO] [klar] [2020-07-13T17:21:13Z] ▶ DOCKER_USER and DOCKER_PASSWORD environment variables have not been configured. Defaulting to DOCKER_USER=$CI_REGISTRY_USER and DOCKER_PASSWORD=$CI_REGISTRY_PASSWORD
[WARN] [klar] [2020-07-13T17:21:14Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 1 of 10
[WARN] [klar] [2020-07-13T17:21:16Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 2 of 10
[WARN] [klar] [2020-07-13T17:21:18Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 3 of 10
[WARN] [klar] [2020-07-13T17:21:20Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 4 of 10
[WARN] [klar] [2020-07-13T17:21:22Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 5 of 10
[WARN] [klar] [2020-07-13T17:21:24Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 6 of 10
[WARN] [klar] [2020-07-13T17:21:26Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 7 of 10
[WARN] [klar] [2020-07-13T17:21:28Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 8 of 10
[WARN] [klar] [2020-07-13T17:21:30Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 9 of 10
[WARN] [klar] [2020-07-13T17:21:32Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 10 of 10
[FATA] [klar] [2020-07-13T17:21:34Z] ▶ error while waiting for vulnerabilities database to start. Giving up after 10 retries.: dial tcp: lookup arminc-clair-db on 169.254.169.254:53: no such host
ERROR: Job failed: exit code 1

Related

How to show custom Grafana plugin in Grafana dashboard correctly?

I am trying to create a Grafana plugin with @grafana/create-plugin.
Based on the readme, I first generated a plugin with
➜ npx @grafana/create-plugin
? What is going to be the name of your plugin? my-panel-plugin
? What is the organization name of your plugin? hongbomiao
? How would you describe your plugin?
? What kind of plugin would you like? panel
? Do you want to add Github CI and Release workflows? No
? Do you want to add a Github workflow for automatically checking "Grafana API compatibility" on PRs? No
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/.eslintrc
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/.prettierrc.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/Dockerfile
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/jest-setup.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/jest.config.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/mocks/react-inlinesvg.tsx
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/README.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/tsconfig.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/types/custom.d.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/webpack/constants.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/webpack/utils.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/webpack/webpack.config.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.eslintrc
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.nvmrc
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.prettierrc.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/CHANGELOG.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/cypress/integration/01-smoke.spec.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/docker-compose.yaml
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.gitignore
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/jest-setup.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/jest.config.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/LICENSE
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/package.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/img/logo.svg
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/README.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/tsconfig.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/README.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/components/SimplePanel.tsx
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/module.test.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/module.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/plugin.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/types.ts
✔ +- /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/README.md
✔ +- /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/README.md
Then I ran yarn install, followed by yarn dev:
➜ yarn dev
yarn run v1.22.19
$ webpack -w -c ./.config/webpack/webpack.config.ts --env development
<e> [LiveReloadPlugin] Live Reload disabled: listen EADDRINUSE: address already in use :::35729
assets by path *.md 183 bytes
asset README.md 131 bytes [emitted] [from: README.md] [copied]
asset CHANGELOG.md 52 bytes [emitted] [from: ../CHANGELOG.md] [copied]
asset module.js 159 KiB [emitted] (name: module)
asset LICENSE 11.1 KiB [emitted] [from: ../LICENSE] [copied]
asset img/logo.svg 1.55 KiB [emitted] [from: img/logo.svg] [copied]
asset plugin.json 891 bytes [emitted] [from: plugin.json] [copied]
runtime modules 1.25 KiB 6 modules
modules by path ../node_modules/lodash/*.js 32 KiB
../node_modules/lodash/defaults.js 1.71 KiB [built] [code generated]
../node_modules/lodash/_baseRest.js 559 bytes [built] [code generated]
../node_modules/lodash/eq.js 799 bytes [built] [code generated]
+ 42 modules
modules by path ./ 10.5 KiB
modules by path ./*.ts 2.92 KiB 3 modules
modules by path ./components/*.tsx 7.63 KiB
./components/ConfigEditor.tsx 4.38 KiB [built] [code generated]
./components/QueryEditor.tsx 3.25 KiB [built] [code generated]
modules by path external "@grafana/ 84 bytes
external "@grafana/data" 42 bytes [built] [code generated]
external "@grafana/ui" 42 bytes [built] [code generated]
external "react" 42 bytes [built] [code generated]
webpack 5.74.0 compiled successfully in 396 ms
Type-checking in progress...
assets by status 173 KiB [cached] 6 assets
cached modules 42.7 KiB (javascript) 1.25 KiB (runtime) [cached] 59 modules
webpack 5.74.0 compiled successfully in 196 ms
Type-checking in progress...
No errors found.
After that, I opened a new terminal and ran yarn server:
➜ yarn server
yarn run v1.22.19
$ docker-compose up --build
[+] Building 1.3s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 755B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/grafana/grafana:9.1.2 1.2s
=> [auth] grafana/grafana:pull token for registry-1.docker.io 0.0s
=> [1/2] FROM docker.io/grafana/grafana:9.1.2@sha256:980ff2697655a0aa5718e40bbda6ac52299d2f3b1584d0081152e2d0a4742078 0.0s
=> CACHED [2/2] RUN sed -i 's/<\/body><\/html>/<script src=\"http:\/\/localhost:35729\/livereload.js\"><\/script><\/body><\/html>/g' /usr/share/grafana/public/views/index.html 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:f776788e5baad34844582c7991d67ac7b5a4c6ff83b6496af5f0f0815aa58198 0.0s
=> => naming to docker.io/library/hongbomiao-mypanelplugin-panel-grafana 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 1/0
⠿ Container hongbomiao-mypanelplugin-panel Created 0.0s
Attaching to hongbomiao-mypanelplugin-panel
hongbomiao-mypanelplugin-panel | Grafana server is running with elevated privileges. This is not recommended
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789836804Z level=info msg="Starting Grafana" version=9.1.2 commit=3c13120cde branch=HEAD compiled=2022-08-30T11:31:21Z
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789958054Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789978013Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789981054Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789984429Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789986554Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789988804Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789991013Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789993304Z level=info msg="Config overridden from Environment variable" var="GF_DEFAULT_APP_MODE=development"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789995596Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789997679Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789999763Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790001888Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790004013Z level=info msg="Config overridden from Environment variable" var="GF_AUTH_ANONYMOUS_ENABLED=true"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790006263Z level=info msg="Config overridden from Environment variable" var="GF_AUTH_ANONYMOUS_ORG_ROLE=Admin"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790008429Z level=info msg="Config overridden from Environment variable" var="GF_AUTH_BASIC_ENABLED=false"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790010638Z level=info msg="Path Home" path=/usr/share/grafana
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790012888Z level=info msg="Path Data" path=/var/lib/grafana
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790015179Z level=info msg="Path Logs" path=/var/log/grafana
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790017388Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790020388Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790022888Z level=info msg="App mode development"
hongbomiao-mypanelplugin-panel | logger=sqlstore t=2022-11-02T21:40:41.790065388Z level=info msg="Connecting to DB" dbtype=sqlite3
hongbomiao-mypanelplugin-panel | logger=migrator t=2022-11-02T21:40:41.800350013Z level=info msg="Starting DB migrations"
hongbomiao-mypanelplugin-panel | logger=migrator t=2022-11-02T21:40:41.803593388Z level=info msg="migrations completed" performed=0 skipped=443 duration=262.625µs
hongbomiao-mypanelplugin-panel | logger=plugin.manager t=2022-11-02T21:40:41.824215679Z level=info msg="Plugin registered" pluginId=input
hongbomiao-mypanelplugin-panel | logger=plugin.signature.validator t=2022-11-02T21:40:41.844108513Z level=warn msg="Permitting unsigned plugin. This is not recommended" pluginID=hongbomiao-mypanelplugin-panel pluginDir=/var/lib/grafana/plugins/hongbomiao-mypanelplugin-panel
hongbomiao-mypanelplugin-panel | logger=plugin.manager t=2022-11-02T21:40:41.844162429Z level=info msg="Plugin registered" pluginId=hongbomiao-mypanelplugin-panel
hongbomiao-mypanelplugin-panel | logger=secrets t=2022-11-02T21:40:41.844371721Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
hongbomiao-mypanelplugin-panel | logger=query_data t=2022-11-02T21:40:41.845634221Z level=info msg="Query Service initialization"
hongbomiao-mypanelplugin-panel | logger=live.push_http t=2022-11-02T21:40:41.848155138Z level=info msg="Live Push Gateway initialization"
hongbomiao-mypanelplugin-panel | logger=ticker t=2022-11-02T21:40:41.857981054Z level=info msg=starting first_tick=2022-11-02T21:40:50Z
hongbomiao-mypanelplugin-panel | logger=infra.usagestats.collector t=2022-11-02T21:40:41.892286679Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
hongbomiao-mypanelplugin-panel | logger=provisioning.datasources t=2022-11-02T21:40:41.897490221Z level=error msg="can't read datasource provisioning files from directory" path=/etc/grafana/provisioning/datasources error="open /etc/grafana/provisioning/datasources: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.plugins t=2022-11-02T21:40:41.898617513Z level=error msg="Failed to read plugin provisioning files from directory" path=/etc/grafana/provisioning/plugins error="open /etc/grafana/provisioning/plugins: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.notifiers t=2022-11-02T21:40:41.900185554Z level=error msg="Can't read alert notification provisioning files from directory" path=/etc/grafana/provisioning/notifiers error="open /etc/grafana/provisioning/notifiers: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.alerting t=2022-11-02T21:40:41.901483513Z level=error msg="can't read alerting provisioning files from directory" path=/etc/grafana/provisioning/alerting error="open /etc/grafana/provisioning/alerting: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.alerting t=2022-11-02T21:40:41.901517929Z level=info msg="starting to provision alerting"
hongbomiao-mypanelplugin-panel | logger=provisioning.alerting t=2022-11-02T21:40:41.901523679Z level=info msg="finished to provision alerting"
hongbomiao-mypanelplugin-panel | logger=grafanaStorageLogger t=2022-11-02T21:40:41.902017471Z level=info msg="storage starting"
hongbomiao-mypanelplugin-panel | logger=ngalert t=2022-11-02T21:40:41.903549763Z level=info msg="warming cache for startup"
hongbomiao-mypanelplugin-panel | logger=http.server t=2022-11-02T21:40:41.904667888Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
hongbomiao-mypanelplugin-panel | logger=provisioning.dashboard t=2022-11-02T21:40:41.904551679Z level=error msg="can't read dashboard provisioning files from directory" path=/etc/grafana/provisioning/dashboards error="open /etc/grafana/provisioning/dashboards: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=ngalert.multiorg.alertmanager t=2022-11-02T21:40:41.912154596Z level=info msg="starting MultiOrg Alertmanager"
hongbomiao-mypanelplugin-panel | logger=context traceID=00000000000000000000000000000000 userId=0 orgId=1 uname= t=2022-11-02T21:40:50.505230044Z level=info msg="Request Completed" method=GET path=/api/live/ws status=0 remote_addr=172.23.0.1 time_ms=3 duration=3.0385ms size=0 referer= traceID=00000000000000000000000000000000
hongbomiao-mypanelplugin-panel | logger=live t=2022-11-02T21:40:50.528537336Z level=info msg="Initialized channel handler" channel=grafana/dashboard/uid/u7hG9eN4z address=grafana/dashboard/uid/u7hG9eN4z
hongbomiao-mypanelplugin-panel | logger=live.features t=2022-11-02T21:40:50.528954711Z level=error msg="Error getting dashboard" query="{Slug: Id:0 Uid:u7hG9eN4z OrgId:1 Result:<nil>}" error="Dashboard not found"
The Grafana dashboard at http://localhost:3000/dashboards is empty.
Based on the Docker log, the provisioning folder is missing, which seems to be the issue. But then I found https://github.com/grafana/plugin-tools/issues/56, which says the provisioning folder is intentionally no longer generated.
How do I show a custom Grafana plugin in the dashboard correctly?
I just found out that the new version of @grafana/create-plugin no longer generates the provisioning folder, which means I have to add it myself.
However, the Docker log is somewhat misleading, as Grafana still expects the provisioning folder.
(The docker-compose.yaml file was generated by the template. You can remove the corresponding line in the generated docker-compose.yaml, and then it won't throw any error.)
For a "panel" plugin, here are the steps below. "Data source" and "app" plugins are similar; I need to add the provisioning files manually in their own places.

k3s multi-master with embedded etcd is failing to form/join the cluster

I have two fresh Ubuntu VMs:
VM-1 (65.0.54.158)
VM-2 (65.2.136.2)
I am trying to set up an HA k3s cluster with embedded etcd, following the official documentation.
Here is what I have executed on VM-1
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --cluster-init
Here is the response from VM-1
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --cluster-init
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Additionally, I checked
sudo kubectl get nodes
and this worked perfectly:
NAME              STATUS   ROLES                       AGE   VERSION
ip-172-31-41-34   Ready    control-plane,etcd,master   18m   v1.24.4+k3s1
Now I am going to ssh into VM-2 and make it join the server running on VM-1
curl -sfL https://get.k3s.io | K3S_TOKEN=AtJMEyWR8pE3HR4RWgT6IsqglOkBm0sLC4n0aDBkng9VE1uqyNevR6oCMNCqQNaF sh -s - server --server https://65.0.54.158:6443
Here is the response:
[INFO] Finding release for channel stable
[INFO] Using v1.24.4+k3s1 as release
[INFO] Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/sha256sum-amd64.txt
[INFO] Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.4+k3s1/k3s
[INFO] Verifying binary download
[INFO] Installing k3s to /usr/local/bin/k3s
[INFO] Skipping installation of SELinux RPM
[INFO] Creating /usr/local/bin/kubectl symlink to k3s
[INFO] Creating /usr/local/bin/crictl symlink to k3s
[INFO] Creating /usr/local/bin/ctr symlink to k3s
[INFO] Creating killall script /usr/local/bin/k3s-killall.sh
[INFO] Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO] env: Creating environment file /etc/systemd/system/k3s.service.env
[INFO] systemd: Creating service file /etc/systemd/system/k3s.service
[INFO] systemd: Enabling k3s unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s.service → /etc/systemd/system/k3s.service.
[INFO] systemd: Starting k3s
Job for k3s.service failed because the control process exited with error code.
See "systemctl status k3s.service" and "journalctl -xe" for details
Here are the contents of /var/log/syslog:
Sep 6 19:10:00 ip-172-31-46-114 systemd[1]: Starting Lightweight Kubernetes...
Sep 6 19:10:00 ip-172-31-46-114 sh[9516]: + /usr/bin/systemctl is-enabled --quiet nm-cloud-setup.service
Sep 6 19:10:00 ip-172-31-46-114 sh[9517]: Failed to get unit file state for nm-cloud-setup.service: No such file or directory
Sep 6 19:10:00 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:00Z" level=info msg="Acquiring lock file /var/lib/rancher/k3s/data/.lock"
Sep 6 19:10:00 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:00Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/577968fa3d58539cc4265245941b7be688833e6bf5ad7869fa2afe02f15f1cd2"
Sep 6 19:10:02 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:02Z" level=info msg="Starting k3s v1.24.4+k3s1 (c3f830e9)"
Sep 6 19:10:22 ip-172-31-46-114 k3s[9520]: time="2022-09-06T19:10:22Z" level=fatal msg="starting kubernetes: preparing server: failed to get CA certs: Get \"https://65.0.54.158:6443/cacerts\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: k3s.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: k3s.service: Failed with result 'exit-code'.
Sep 6 19:10:22 ip-172-31-46-114 systemd[1]: Failed to start Lightweight Kubernetes.
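One diagnostic I plan to run next (my own idea, not from the k3s docs) is to confirm basic reachability of the API port from VM-2, since the fatal error is a timeout fetching the CA certs:

# Check from VM-2 whether VM-1's k3s API port is reachable at all; the fatal
# error above is a timeout while fetching the CA certs from this exact URL.
curl -vk --max-time 10 https://65.0.54.158:6443/cacerts
# If this also times out, the problem is likely network-level (e.g. a cloud
# security group / firewall not allowing TCP 6443 between the VMs) rather than k3s.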
I have been stuck on this for two days. I would really appreciate some help. Thank you.

Cannot connect to haproxy exporter

I have configured HAProxy locally on my machine and am trying to read its metrics using haproxy_exporter.
I am using the haproxy_exporter Docker image and ran it with the command below.
docker run -p 9101:9101 prom/haproxy-exporter:latest --haproxy.scrape-uri="http://localhost:8181/stats;csv"
When I try to reach the metrics endpoint, I get an error saying the connection is refused. What am I doing wrong?
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:584 level=info msg="Starting haproxy_exporter" version="(version=0.13.0, branch=HEAD, revision=c5c72aa059b69c18ab38fd63777653c13eddaa7f)"
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:585 level=info msg="Build context" context="(go=go1.17.3, user=root@77b4a325967c, date=20211126-09:54:41)"
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:603 level=info msg="Listening on address" address=:9101
ts=2022-05-06T10:03:49.464Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false
ts=2022-05-06T10:03:56.366Z caller=haproxy_exporter.go:399 level=error msg="Can't scrape HAProxy" err="Get \"http://localhost:8181/stats;csv\": dial tcp 127.0.0.1:8181: connect: connection refused"
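My current suspicion (untested): localhost inside the exporter container refers to the container itself, not to my machine where HAProxy is listening on 8181. Something along these lines might be worth trying:

# Sketch of an alternative invocation, assuming Docker Engine 20.10+ where the
# special host-gateway value is available; host.docker.internal then points at
# the machine running Docker instead of the exporter container itself.
docker run -p 9101:9101 \
  --add-host=host.docker.internal:host-gateway \
  prom/haproxy-exporter:latest \
  --haproxy.scrape-uri="http://host.docker.internal:8181/stats;csv"
# Another option would be --network host (Linux only), so that localhost inside
# the container really is the local machine.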

Weavescope on Microk8s doesn't recognize containers

I'm running a Microk8s single-node cluster and just installed Weavescope, but it doesn't recognize any running containers. I can see my pods and services fine, but each pod simply states "0 containers" underneath.
The logs from the Weavescope agent and app pods indicate that something is very wrong, but I'm not adept enough with Kubernetes to know how to deal with the errors.
Logs from Weavescope agent:
microk8s.kubectl logs -n weave weave-scope-cluster-agent-7944c858c9-bszjw
time="2020-05-23T14:56:10Z" level=info msg="publishing to: weave-scope-app.weave.svc.cluster.local:80"
<probe> INFO: 2020/05/23 14:56:10.378586 Basic authentication disabled
<probe> INFO: 2020/05/23 14:56:10.439179 command line args: --mode=probe --probe-only=true --probe.http.listen=:4041 --probe.kubernetes.role=cluster --probe.publish.interval=4.5s --probe.spy.interval=2s weave-scope-app.weave.svc.cluster.local:80
<probe> INFO: 2020/05/23 14:56:10.439215 probe starting, version 1.13.1, ID 6336ff46bcd86913
<probe> ERRO: 2020/05/23 14:56:10.439261 Error getting docker bridge ip: route ip+net: no such network interface
<probe> INFO: 2020/05/23 14:56:10.439487 kubernetes: targeting api server https://10.152.183.1:443
<probe> ERRO: 2020/05/23 14:56:10.440206 plugins: problem loading: no such file or directory
<probe> INFO: 2020/05/23 14:56:10.444345 Profiling data being exported to :4041
<probe> INFO: 2020/05/23 14:56:10.444355 go tool pprof http://:4041/debug/pprof/{profile,heap,block}
<probe> WARN: 2020/05/23 14:56:10.444505 Error collecting weave status, backing off 10s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> INFO: 2020/05/23 14:56:10.506596 volumesnapshotdatas are not supported by this Kubernetes version
<probe> INFO: 2020/05/23 14:56:10.506950 volumesnapshots are not supported by this Kubernetes version
<probe> INFO: 2020/05/23 14:56:11.559811 Control connection to weave-scope-app.weave.svc.cluster.local starting
<probe> INFO: 2020/05/23 14:56:14.948382 Publish loop for weave-scope-app.weave.svc.cluster.local starting
<probe> WARN: 2020/05/23 14:56:20.447578 Error collecting weave status, backing off 20s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> WARN: 2020/05/23 14:56:40.451421 Error collecting weave status, backing off 40s: Get http://127.0.0.1:6784/report: dial tcp 127.0.0.1:6784: connect: connection refused. If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<probe> INFO: 2020/05/23 15:19:12.825869 Pipe pipe-7287306037502507515 connection to weave-scope-app.weave.svc.cluster.local starting
<probe> INFO: 2020/05/23 15:19:16.509232 Pipe pipe-7287306037502507515 connection to weave-scope-app.weave.svc.cluster.local exiting
Logs from Weavescope app:
microk8s.kubectl logs -n weave weave-scope-app-bc7444d59-csxjd
<app> INFO: 2020/05/23 14:56:11.221084 app starting, version 1.13.1, ID 5e3953d1209f7147
<app> INFO: 2020/05/23 14:56:11.221114 command line args: --mode=app
<app> INFO: 2020/05/23 14:56:11.275231 Basic authentication disabled
<app> INFO: 2020/05/23 14:56:11.290717 listening on :4040
<app> WARN: 2020/05/23 14:56:11.340182 Error updating weaveDNS, backing off 20s: Error running weave ps: exit status 1: "Link not found\n". If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<app> WARN: 2020/05/23 14:56:31.457702 Error updating weaveDNS, backing off 40s: Error running weave ps: exit status 1: "Link not found\n". If you are not running Weave Net, you may wish to suppress this warning by launching scope with the `--weave=false` option.
<app> ERRO: 2020/05/23 15:19:16.504169 Error copying to pipe pipe-7287306037502507515 (1) websocket: io: read/write on closed pipe
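One thing I suspect, though I have not confirmed it, is that Microk8s does not run Docker at all (it uses containerd), so Scope's Docker-based container view stays empty; the "Error getting docker bridge ip" line above points the same way. A quick check of which runtime the node reports:

# Look at the CONTAINER-RUNTIME column; on a stock Microk8s install it shows
# containerd://..., which would leave the Docker probe with nothing to enumerate.
microk8s.kubectl get nodes -o wide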

prometheus 2.0.0 error msg="Opening storage failed" err="open DB in /home/prometheus: Lockfile created, but doesn't exist"

Context: I am trying to use Prometheus on k8s 1.10.2 with Azure File storage as the persistent storage medium.
Problem: using azurefile storage with Prometheus gives me the following error:
level=info ts=2018-06-29T11:08:50.603235475Z caller=main.go:215 msg="Starting Prometheus" version="(version=2.0.0, branch=HEAD, revision=0a74f98628a0463dddc90528220c94de5032d1a0)"
level=info ts=2018-06-29T11:08:50.603302775Z caller=main.go:216 build_context="(go=go1.9.2, user=root@615b82cb36b6, date=20171108-07:11:59)"
level=info ts=2018-06-29T11:08:50.603341576Z caller=main.go:217 host_details="(Linux 4.15.0-1013-azure #13~16.04.2-Ubuntu SMP Wed May 30 01:39:27 UTC 2018 x86_64 prometheus-84f89cd668-r8p5r (none))"
level=info ts=2018-06-29T11:08:50.605677083Z caller=web.go:380 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-06-29T11:08:50.605759983Z caller=main.go:314 msg="Starting TSDB"
level=info ts=2018-06-29T11:08:50.605816483Z caller=targetmanager.go:71 component="target manager" msg="Starting target manager..."
level=error ts=2018-06-29T11:08:50.778059828Z caller=main.go:323 msg="Opening storage failed" err="open DB in /home/prometheus: Lockfile created, but doesn't exist"
Note: I do not want to use the --storage.tsdb.no-lockfile flag on the Prometheus deployment.
Is there any other way I can fix this issue?
Thanks for any input on how to fix this issue.