prometheus 2.0.0 error msg="Opening storage failed" err="open DB in /home/prometheus: Lockfile created, but doesn't exist" - kubernetes

Context: I am trying to use Prometheus on Kubernetes 1.10.2, with Azure File storage as the persistent storage medium.
Problem: Using azurefile storage with Prometheus gives me the following error:
level=info ts=2018-06-29T11:08:50.603235475Z caller=main.go:215 msg="Starting Prometheus" version="(version=2.0.0, branch=HEAD, revision=0a74f98628a0463dddc90528220c94de5032d1a0)"
level=info ts=2018-06-29T11:08:50.603302775Z caller=main.go:216 build_context="(go=go1.9.2, user=root@615b82cb36b6, date=20171108-07:11:59)"
level=info ts=2018-06-29T11:08:50.603341576Z caller=main.go:217 host_details="(Linux 4.15.0-1013-azure #13~16.04.2-Ubuntu SMP Wed May 30 01:39:27 UTC 2018 x86_64 prometheus-84f89cd668-r8p5r (none))"
level=info ts=2018-06-29T11:08:50.605677083Z caller=web.go:380 component=web msg="Start listening for connections" address=0.0.0.0:9090
level=info ts=2018-06-29T11:08:50.605759983Z caller=main.go:314 msg="Starting TSDB"
level=info ts=2018-06-29T11:08:50.605816483Z caller=targetmanager.go:71 component="target manager" msg="Starting target manager..."
level=error ts=2018-06-29T11:08:50.778059828Z caller=main.go:323 msg="Opening storage failed" err="open DB in /home/prometheus: Lockfile created, but doesn't exist"
Note: I do not want to use the --storage.tsdb.no-lockfile flag on the Prometheus deployment.
Is there any other way I can fix this issue?
Thanks for any input on how to fix this issue.
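Not part of the original question, but for context: this error is commonly attributed to Azure Files' SMB byte-range locking, which interferes with the TSDB lockfile check, so a frequently suggested workaround is to mount the share with the nobrl option (or to move the PVC to a disk-backed storage class such as managed-premium). A minimal sketch of such a StorageClass, assuming the in-tree azure-file provisioner honours mountOptions, could look like this (the class name is a placeholder):

# Hypothetical StorageClass: nobrl disables SMB byte-range locks, which are the
# usual cause of the "Lockfile created, but doesn't exist" error on azurefile.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-nobrl
provisioner: kubernetes.io/azure-file
mountOptions:
  - nobrl
parameters:
  skuName: Standard_LRS

The Prometheus PVC would then reference storageClassName: azurefile-nobrl, and the --storage.tsdb.no-lockfile flag can stay off.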

Related

How to show custom Grafana plugin in Grafana dashboard correctly?

I am trying to create a Grafana plugin with @grafana/create-plugin.
Based on the README, I first generated a plugin with:
➜ npx @grafana/create-plugin
? What is going to be the name of your plugin? my-panel-plugin
? What is the organization name of your plugin? hongbomiao
? How would you describe your plugin?
? What kind of plugin would you like? panel
? Do you want to add Github CI and Release workflows? No
? Do you want to add a Github workflow for automatically checking "Grafana API compatibility" on PRs? No
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/.eslintrc
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/.prettierrc.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/Dockerfile
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/jest-setup.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/jest.config.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/mocks/react-inlinesvg.tsx
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/README.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/tsconfig.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/types/custom.d.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/webpack/constants.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/webpack/utils.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.config/webpack/webpack.config.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.eslintrc
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.nvmrc
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.prettierrc.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/CHANGELOG.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/cypress/integration/01-smoke.spec.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/docker-compose.yaml
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/.gitignore
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/jest-setup.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/jest.config.js
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/LICENSE
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/package.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/img/logo.svg
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/README.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/tsconfig.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/README.md
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/components/SimplePanel.tsx
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/module.test.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/module.ts
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/plugin.json
✔ ++ /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/src/types.ts
✔ +- /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/README.md
✔ +- /Users/hongbo-miao/Clouds/Git/new-grafana-plugins/hongbomiao-mypanelplugin-panel/README.md
Then I ran yarn install, followed by yarn dev:
➜ yarn dev
yarn run v1.22.19
$ webpack -w -c ./.config/webpack/webpack.config.ts --env development
<e> [LiveReloadPlugin] Live Reload disabled: listen EADDRINUSE: address already in use :::35729
assets by path *.md 183 bytes
asset README.md 131 bytes [emitted] [from: README.md] [copied]
asset CHANGELOG.md 52 bytes [emitted] [from: ../CHANGELOG.md] [copied]
asset module.js 159 KiB [emitted] (name: module)
asset LICENSE 11.1 KiB [emitted] [from: ../LICENSE] [copied]
asset img/logo.svg 1.55 KiB [emitted] [from: img/logo.svg] [copied]
asset plugin.json 891 bytes [emitted] [from: plugin.json] [copied]
runtime modules 1.25 KiB 6 modules
modules by path ../node_modules/lodash/*.js 32 KiB
../node_modules/lodash/defaults.js 1.71 KiB [built] [code generated]
../node_modules/lodash/_baseRest.js 559 bytes [built] [code generated]
../node_modules/lodash/eq.js 799 bytes [built] [code generated]
+ 42 modules
modules by path ./ 10.5 KiB
modules by path ./*.ts 2.92 KiB 3 modules
modules by path ./components/*.tsx 7.63 KiB
./components/ConfigEditor.tsx 4.38 KiB [built] [code generated]
./components/QueryEditor.tsx 3.25 KiB [built] [code generated]
modules by path external "@grafana/ 84 bytes
external "@grafana/data" 42 bytes [built] [code generated]
external "@grafana/ui" 42 bytes [built] [code generated]
external "react" 42 bytes [built] [code generated]
webpack 5.74.0 compiled successfully in 396 ms
Type-checking in progress...
assets by status 173 KiB [cached] 6 assets
cached modules 42.7 KiB (javascript) 1.25 KiB (runtime) [cached] 59 modules
webpack 5.74.0 compiled successfully in 196 ms
Type-checking in progress...
No errors found.
After that, I opened a new terminal and ran yarn server:
➜ yarn server
yarn run v1.22.19
$ docker-compose up --build
[+] Building 1.3s (7/7) FINISHED
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 755B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/grafana/grafana:9.1.2 1.2s
=> [auth] grafana/grafana:pull token for registry-1.docker.io 0.0s
=> [1/2] FROM docker.io/grafana/grafana:9.1.2@sha256:980ff2697655a0aa5718e40bbda6ac52299d2f3b1584d0081152e2d0a4742078 0.0s
=> CACHED [2/2] RUN sed -i 's/<\/body><\/html>/<script src=\"http:\/\/localhost:35729\/livereload.js\"><\/script><\/body><\/html>/g' /usr/share/grafana/public/views/index.html 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:f776788e5baad34844582c7991d67ac7b5a4c6ff83b6496af5f0f0815aa58198 0.0s
=> => naming to docker.io/library/hongbomiao-mypanelplugin-panel-grafana 0.0s
Use 'docker scan' to run Snyk tests against images to find vulnerabilities and learn how to fix them
[+] Running 1/0
⠿ Container hongbomiao-mypanelplugin-panel Created 0.0s
Attaching to hongbomiao-mypanelplugin-panel
hongbomiao-mypanelplugin-panel | Grafana server is running with elevated privileges. This is not recommended
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789836804Z level=info msg="Starting Grafana" version=9.1.2 commit=3c13120cde branch=HEAD compiled=2022-08-30T11:31:21Z
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789958054Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789978013Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789981054Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789984429Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789986554Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789988804Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789991013Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789993304Z level=info msg="Config overridden from Environment variable" var="GF_DEFAULT_APP_MODE=development"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789995596Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789997679Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.789999763Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790001888Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790004013Z level=info msg="Config overridden from Environment variable" var="GF_AUTH_ANONYMOUS_ENABLED=true"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790006263Z level=info msg="Config overridden from Environment variable" var="GF_AUTH_ANONYMOUS_ORG_ROLE=Admin"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790008429Z level=info msg="Config overridden from Environment variable" var="GF_AUTH_BASIC_ENABLED=false"
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790010638Z level=info msg="Path Home" path=/usr/share/grafana
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790012888Z level=info msg="Path Data" path=/var/lib/grafana
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790015179Z level=info msg="Path Logs" path=/var/log/grafana
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790017388Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790020388Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
hongbomiao-mypanelplugin-panel | logger=settings t=2022-11-02T21:40:41.790022888Z level=info msg="App mode development"
hongbomiao-mypanelplugin-panel | logger=sqlstore t=2022-11-02T21:40:41.790065388Z level=info msg="Connecting to DB" dbtype=sqlite3
hongbomiao-mypanelplugin-panel | logger=migrator t=2022-11-02T21:40:41.800350013Z level=info msg="Starting DB migrations"
hongbomiao-mypanelplugin-panel | logger=migrator t=2022-11-02T21:40:41.803593388Z level=info msg="migrations completed" performed=0 skipped=443 duration=262.625µs
hongbomiao-mypanelplugin-panel | logger=plugin.manager t=2022-11-02T21:40:41.824215679Z level=info msg="Plugin registered" pluginId=input
hongbomiao-mypanelplugin-panel | logger=plugin.signature.validator t=2022-11-02T21:40:41.844108513Z level=warn msg="Permitting unsigned plugin. This is not recommended" pluginID=hongbomiao-mypanelplugin-panel pluginDir=/var/lib/grafana/plugins/hongbomiao-mypanelplugin-panel
hongbomiao-mypanelplugin-panel | logger=plugin.manager t=2022-11-02T21:40:41.844162429Z level=info msg="Plugin registered" pluginId=hongbomiao-mypanelplugin-panel
hongbomiao-mypanelplugin-panel | logger=secrets t=2022-11-02T21:40:41.844371721Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
hongbomiao-mypanelplugin-panel | logger=query_data t=2022-11-02T21:40:41.845634221Z level=info msg="Query Service initialization"
hongbomiao-mypanelplugin-panel | logger=live.push_http t=2022-11-02T21:40:41.848155138Z level=info msg="Live Push Gateway initialization"
hongbomiao-mypanelplugin-panel | logger=ticker t=2022-11-02T21:40:41.857981054Z level=info msg=starting first_tick=2022-11-02T21:40:50Z
hongbomiao-mypanelplugin-panel | logger=infra.usagestats.collector t=2022-11-02T21:40:41.892286679Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
hongbomiao-mypanelplugin-panel | logger=provisioning.datasources t=2022-11-02T21:40:41.897490221Z level=error msg="can't read datasource provisioning files from directory" path=/etc/grafana/provisioning/datasources error="open /etc/grafana/provisioning/datasources: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.plugins t=2022-11-02T21:40:41.898617513Z level=error msg="Failed to read plugin provisioning files from directory" path=/etc/grafana/provisioning/plugins error="open /etc/grafana/provisioning/plugins: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.notifiers t=2022-11-02T21:40:41.900185554Z level=error msg="Can't read alert notification provisioning files from directory" path=/etc/grafana/provisioning/notifiers error="open /etc/grafana/provisioning/notifiers: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.alerting t=2022-11-02T21:40:41.901483513Z level=error msg="can't read alerting provisioning files from directory" path=/etc/grafana/provisioning/alerting error="open /etc/grafana/provisioning/alerting: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=provisioning.alerting t=2022-11-02T21:40:41.901517929Z level=info msg="starting to provision alerting"
hongbomiao-mypanelplugin-panel | logger=provisioning.alerting t=2022-11-02T21:40:41.901523679Z level=info msg="finished to provision alerting"
hongbomiao-mypanelplugin-panel | logger=grafanaStorageLogger t=2022-11-02T21:40:41.902017471Z level=info msg="storage starting"
hongbomiao-mypanelplugin-panel | logger=ngalert t=2022-11-02T21:40:41.903549763Z level=info msg="warming cache for startup"
hongbomiao-mypanelplugin-panel | logger=http.server t=2022-11-02T21:40:41.904667888Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
hongbomiao-mypanelplugin-panel | logger=provisioning.dashboard t=2022-11-02T21:40:41.904551679Z level=error msg="can't read dashboard provisioning files from directory" path=/etc/grafana/provisioning/dashboards error="open /etc/grafana/provisioning/dashboards: no such file or directory"
hongbomiao-mypanelplugin-panel | logger=ngalert.multiorg.alertmanager t=2022-11-02T21:40:41.912154596Z level=info msg="starting MultiOrg Alertmanager"
hongbomiao-mypanelplugin-panel | logger=context traceID=00000000000000000000000000000000 userId=0 orgId=1 uname= t=2022-11-02T21:40:50.505230044Z level=info msg="Request Completed" method=GET path=/api/live/ws status=0 remote_addr=172.23.0.1 time_ms=3 duration=3.0385ms size=0 referer= traceID=00000000000000000000000000000000
hongbomiao-mypanelplugin-panel | logger=live t=2022-11-02T21:40:50.528537336Z level=info msg="Initialized channel handler" channel=grafana/dashboard/uid/u7hG9eN4z address=grafana/dashboard/uid/u7hG9eN4z
hongbomiao-mypanelplugin-panel | logger=live.features t=2022-11-02T21:40:50.528954711Z level=error msg="Error getting dashboard" query="{Slug: Id:0 Uid:u7hG9eN4z OrgId:1 Result:<nil>}" error="Dashboard not found"
The Grafana dashboard at http://localhost:3000/dashboards is empty.
Based on the Docker log, the provisioning folder is empty, which seems to be the issue. But then I found https://github.com/grafana/plugin-tools/issues/56, which says the provisioning folder is intentionally no longer generated.
How to show custom Grafana plugin in the Dashboard correctly?
I just found the cause: the new version of @grafana/create-plugin no longer generates the provisioning folder, which means I have to add it myself.
However, the Docker log is somewhat misleading, as Grafana still expects the provisioning folder.
(The docker-compose.yaml file is generated by the template. You can remove the corresponding line from the generated docker-compose.yaml, and then it won't throw the error.)
For a "panel" plugin, the steps are below; "data source" and "app" plugins similarly need the provisioning files added manually in their own places.

Cannot connect to haproxy exporter

I have configured HAProxy locally on my machine and tried to read its metrics using haproxy_exporter.
I am using the Docker image of haproxy_exporter and ran it with the command below.
docker run -p 9101:9101 prom/haproxy-exporter:latest --haproxy.scrape-uri="http://localhost:8181/stats;csv"
If I try to reach the metrics endpoint, I get an error saying the connection is refused. What am I doing wrong?
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:584 level=info msg="Starting haproxy_exporter" version="(version=0.13.0, branch=HEAD, revision=c5c72aa059b69c18ab38fd63777653c13eddaa7f)"
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:585 level=info msg="Build context" context="(go=go1.17.3, user=root@77b4a325967c, date=20211126-09:54:41)"
ts=2022-05-06T10:03:49.462Z caller=haproxy_exporter.go:603 level=info msg="Listening on address" address=:9101
ts=2022-05-06T10:03:49.464Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false
ts=2022-05-06T10:03:56.366Z caller=haproxy_exporter.go:399 level=error msg="Can't scrape HAProxy" err="Get \"http://localhost:8181/stats;csv\": dial tcp 127.0.0.1:8181: connect: connection refused"
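Not stated in the question, but the usual explanation for this error is that localhost inside the exporter container refers to the container itself, not to the machine where HAProxy listens on port 8181. Assuming HAProxy really does listen on the host, a sketch of two common fixes:

# On Linux, share the host network so localhost:8181 is reachable
# (the -p port mapping becomes unnecessary in this mode):
docker run --network host prom/haproxy-exporter:latest \
  --haproxy.scrape-uri="http://localhost:8181/stats;csv"

# On Docker Desktop (macOS/Windows), keep the port mapping and point at the
# special host alias instead:
docker run -p 9101:9101 prom/haproxy-exporter:latest \
  --haproxy.scrape-uri="http://host.docker.internal:8181/stats;csv"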

disable wal replay on crash for prometheus

Is there a way to disable WAL replay on crash for Prometheus?
It takes a while for a pod to come back up due to WAL replay:
We can afford to lose some metrics if it means faster recovery after a crash.
level=info ts=2021-04-22T20:13:42.568Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=449 maxSegment=513
level=info ts=2021-04-22T20:13:57.555Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=450 maxSegment=513
level=info ts=2021-04-22T20:14:12.222Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=451 maxSegment=513
level=info ts=2021-04-22T20:14:25.491Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=452 maxSegment=513
level=info ts=2021-04-22T20:14:39.258Z caller=head.go:714 component=tsdb msg="WAL segment loaded" segment=453 maxSegment=513
Not specifically, that I'm aware of. You would have to rm -rf the wal/ directory before starting Prometheus. It is usually better to run multiple instances via Thanos or Cortex than to go down this path.
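Building on that (not from the original answer; the volume name and mount path are assumptions about the pod spec), the rm -rf can be automated with an initContainer that wipes the WAL before Prometheus starts, trading the unflushed samples for a faster restart:

# Hypothetical addition to the Prometheus pod spec.
initContainers:
  - name: wipe-wal
    image: busybox:1.35
    command: ["sh", "-c", "rm -rf /prometheus/wal"]
    volumeMounts:
      - name: prometheus-data   # must match the volume backing --storage.tsdb.path
        mountPath: /prometheus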

How to communicate with a gitlab service container

I have the following .gitlab-ci.yml file:
stages:
  - scan

scanning:
  stage: scan
  image: docker:19.03.6
  services:
    - name: arminc/clair-db:latest
    - name: docker:dind
  before_script:
    - docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
  script:
    - export LOCAL_MACHINE_IP_ADDRESS=arminc-clair-db
    - ping -c 4 $LOCAL_MACHINE_IP_ADDRESS:5432 # Pinging 'arminc-clair-db:5432' to prove that it IS accessible
    - docker run --interactive --rm --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@${LOCAL_MACHINE_IP_ADDRESS}:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=vismarkjuarez1994/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar
Everything runs just fine until the last script command, because the host arminc-clair-db:5432 (which is a service) cannot be resolved. How do I get my docker container to "see" and communicate with the arminc/clair-db container?
Below are all the output logs, with the error at the bottom:
Running with gitlab-runner 13.1.0 (6214287e)
on docker-auto-scale fa6cab46
Preparing the "docker+machine" executor
01:18
Using Docker executor with image docker:19.03.6 ...
Starting service arminc/clair-db:latest ...
Pulling docker image arminc/clair-db:latest ...
Using docker image sha256:032e46f9e42c3f26280ed984de737e5d3d1ca99bb641414b13226c6c62556feb for arminc/clair-db:latest ...
Starting service docker:dind ...
Pulling docker image docker:dind ...
Using docker image sha256:d5d139be840a6ffa04348fc87740e8c095cade6e9cb977785fdba51e5fd7ffec for docker:dind ...
Waiting for services to be up and running...
*** WARNING: Service runner-fa6cab46-project-19334692-concurrent-0-f3fcf99fb2cfbb7e-docker-1 probably didn't start properly.
Health check error:
service "runner-fa6cab46-project-19334692-concurrent-0-f3fcf99fb2cfbb7e-docker-1-wait-for-service" timeout
Health check container logs:
Service container logs:
2020-07-13T17:20:21.309966089Z time="2020-07-13T17:20:21.294373440Z" level=info msg="Starting up"
2020-07-13T17:20:21.310002707Z time="2020-07-13T17:20:21.300572503Z" level=warning msg="could not change group /var/run/docker.sock to docker: group docker not found"
2020-07-13T17:20:21.310007147Z time="2020-07-13T17:20:21.302510800Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting --tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
2020-07-13T17:20:21.310010666Z time="2020-07-13T17:20:21.307985849Z" level=info msg="libcontainerd: started new containerd process" pid=18
2020-07-13T17:20:21.320081834Z time="2020-07-13T17:20:21.312042380Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.320098887Z time="2020-07-13T17:20:21.312066080Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.320103371Z time="2020-07-13T17:20:21.312091196Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.320107440Z time="2020-07-13T17:20:21.312100727Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.409230132Z time="2020-07-13T17:20:21.365046801Z" level=info msg="starting containerd" revision=7ad184331fa3e55e52b890ea95e65ba581ae3429 version=v1.2.13
2020-07-13T17:20:21.409258920Z time="2020-07-13T17:20:21.380378683Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
2020-07-13T17:20:21.409263750Z time="2020-07-13T17:20:21.380477131Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409268507Z time="2020-07-13T17:20:21.380709522Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-07-13T17:20:21.409274034Z time="2020-07-13T17:20:21.380721072Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409277876Z time="2020-07-13T17:20:21.401607600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-07-13T17:20:21.409282127Z time="2020-07-13T17:20:21.401634659Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409286166Z time="2020-07-13T17:20:21.401762230Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409290021Z time="2020-07-13T17:20:21.402131753Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409293511Z time="2020-07-13T17:20:21.402413697Z" level=info msg="skip loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
2020-07-13T17:20:21.409296992Z time="2020-07-13T17:20:21.402424580Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
2020-07-13T17:20:21.409309649Z time="2020-07-13T17:20:21.402470351Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
2020-07-13T17:20:21.409313855Z time="2020-07-13T17:20:21.402491305Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "ip: can't find device 'aufs'\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n": exit status 1"
2020-07-13T17:20:21.409317643Z time="2020-07-13T17:20:21.402498473Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin"
2020-07-13T17:20:21.422337155Z time="2020-07-13T17:20:21.415144189Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
2020-07-13T17:20:21.422366287Z time="2020-07-13T17:20:21.415177490Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
2020-07-13T17:20:21.422370656Z time="2020-07-13T17:20:21.415220797Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422374289Z time="2020-07-13T17:20:21.415238800Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422377575Z time="2020-07-13T17:20:21.415249745Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422380973Z time="2020-07-13T17:20:21.415261483Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422384239Z time="2020-07-13T17:20:21.415285048Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422387513Z time="2020-07-13T17:20:21.415296352Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422390735Z time="2020-07-13T17:20:21.415306615Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422393948Z time="2020-07-13T17:20:21.415317274Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
2020-07-13T17:20:21.422397113Z time="2020-07-13T17:20:21.415539917Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
2020-07-13T17:20:21.422400411Z time="2020-07-13T17:20:21.415661653Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
2020-07-13T17:20:21.422403653Z time="2020-07-13T17:20:21.416302031Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
2020-07-13T17:20:21.422406821Z time="2020-07-13T17:20:21.416331092Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
2020-07-13T17:20:21.422421577Z time="2020-07-13T17:20:21.416366348Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422425309Z time="2020-07-13T17:20:21.416378191Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422428496Z time="2020-07-13T17:20:21.416388907Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422431715Z time="2020-07-13T17:20:21.416399419Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422434873Z time="2020-07-13T17:20:21.416410316Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422438168Z time="2020-07-13T17:20:21.416421136Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422441391Z time="2020-07-13T17:20:21.416431148Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422444483Z time="2020-07-13T17:20:21.416441183Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422447730Z time="2020-07-13T17:20:21.416450302Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
2020-07-13T17:20:21.422450979Z time="2020-07-13T17:20:21.416686499Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422454181Z time="2020-07-13T17:20:21.416699740Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422457333Z time="2020-07-13T17:20:21.416710157Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422460460Z time="2020-07-13T17:20:21.416720291Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
2020-07-13T17:20:21.422463733Z time="2020-07-13T17:20:21.417000522Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
2020-07-13T17:20:21.422466909Z time="2020-07-13T17:20:21.417067077Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
2020-07-13T17:20:21.422470029Z time="2020-07-13T17:20:21.417076538Z" level=info msg="containerd successfully booted in 0.053678s"
2020-07-13T17:20:21.522992752Z time="2020-07-13T17:20:21.445278000Z" level=info msg="Setting the storage driver from the $DOCKER_DRIVER environment variable (overlay2)"
2020-07-13T17:20:21.523023868Z time="2020-07-13T17:20:21.445485273Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.523028739Z time="2020-07-13T17:20:21.445497879Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.523032708Z time="2020-07-13T17:20:21.445515130Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.523047388Z time="2020-07-13T17:20:21.445523771Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.523051107Z time="2020-07-13T17:20:21.446951417Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2020-07-13T17:20:21.523054385Z time="2020-07-13T17:20:21.446967002Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2020-07-13T17:20:21.523058180Z time="2020-07-13T17:20:21.446982738Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>}" module=grpc
2020-07-13T17:20:21.523061840Z time="2020-07-13T17:20:21.446991531Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2020-07-13T17:20:21.523065052Z time="2020-07-13T17:20:21.499800243Z" level=info msg="Loading containers: start."
2020-07-13T17:20:21.529844429Z time="2020-07-13T17:20:21.527114868Z" level=warning msg="Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'\nbridge 167936 1 br_netfilter\nstp 16384 1 bridge\nllc 16384 2 bridge,stp\nip: can't find device 'br_netfilter'\nbr_netfilter 24576 0 \nbridge 167936 1 br_netfilter\nmodprobe: can't change directory to '/lib/modules': No such file or directory\n, error: exit status 1"
2020-07-13T17:20:21.680323559Z time="2020-07-13T17:20:21.642534057Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2020-07-13T17:20:21.715621265Z time="2020-07-13T17:20:21.712210528Z" level=info msg="Loading containers: done."
2020-07-13T17:20:21.741819560Z time="2020-07-13T17:20:21.740973136Z" level=info msg="Docker daemon" commit=48a66213fe graphdriver(s)=overlay2 version=19.03.12
2020-07-13T17:20:21.743421190Z time="2020-07-13T17:20:21.741147945Z" level=info msg="Daemon has completed initialization"
2020-07-13T17:20:21.858939063Z time="2020-07-13T17:20:21.849737479Z" level=info msg="API listen on [::]:2375"
2020-07-13T17:20:21.859149575Z time="2020-07-13T17:20:21.859084136Z" level=info msg="API listen on /var/run/docker.sock"
*********
Pulling docker image docker:19.03.6 ...
Using docker image sha256:6512892b576811235f68a6dcd5fbe10b387ac0ba3709aeaf80cd5cfcecb387c7 for docker:19.03.6 ...
Preparing environment
00:03
Running on runner-fa6cab46-project-19334692-concurrent-0 via runner-fa6cab46-srm-1594660737-ea79a1df...
Getting source from Git repository
00:04
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/vismarkjuarez/car-assembly-line/.git/
Created fresh repository.
Checking out 71a14f15 as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:27
$ docker login -u $DOCKER_USERNAME -p $DOCKER_PASSWORD
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ export LOCAL_MACHINE_IP_ADDRESS=arminc-clair-db
$ ping -c 4 $LOCAL_MACHINE_IP_ADDRESS:5432
PING arminc-clair-db:5432 (172.17.0.3): 56 data bytes
64 bytes from 172.17.0.3: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.17.0.3: seq=1 ttl=64 time=0.086 ms
64 bytes from 172.17.0.3: seq=2 ttl=64 time=0.079 ms
64 bytes from 172.17.0.3: seq=3 ttl=64 time=0.081 ms
--- arminc-clair-db:5432 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.079/0.085/0.096 ms
$ docker run --interactive --rm --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@${LOCAL_MACHINE_IP_ADDRESS}:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=[MASKED]/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar
Unable to find image 'registry.gitlab.com/gitlab-org/security-products/analyzers/klar:latest' locally
latest: Pulling from gitlab-org/security-products/analyzers/klar
c9b1b535fdd9: Pulling fs layer
9d4a5a860853: Pulling fs layer
f02644185e38: Pulling fs layer
01f6d8f93d4f: Pulling fs layer
6756a56563fe: Pulling fs layer
01f6d8f93d4f: Waiting
6756a56563fe: Waiting
9d4a5a860853: Verifying Checksum
9d4a5a860853: Download complete
01f6d8f93d4f: Verifying Checksum
01f6d8f93d4f: Download complete
c9b1b535fdd9: Verifying Checksum
c9b1b535fdd9: Download complete
f02644185e38: Verifying Checksum
f02644185e38: Download complete
6756a56563fe: Verifying Checksum
6756a56563fe: Download complete
c9b1b535fdd9: Pull complete
9d4a5a860853: Pull complete
f02644185e38: Pull complete
01f6d8f93d4f: Pull complete
6756a56563fe: Pull complete
Digest: sha256:229558a024e5a1c6ca5b1ed67bc13f6eeca15d4cd63c278ef6c3efa357630bde
Status: Downloaded newer image for registry.gitlab.com/gitlab-org/security-products/analyzers/klar:latest
[INFO] [klar] [2020-07-13T17:21:13Z] ▶ GitLab klar analyzer v2.4.8
[WARN] [klar] [2020-07-13T17:21:13Z] ▶ Allowlist file with path '/tmp/app/clair-whitelist.yml' does not exist, skipping
[WARN] [klar] [2020-07-13T17:21:13Z] ▶ Allowlist file with path '/tmp/app/vulnerability-allowlist.yml' does not exist, skipping
[INFO] [klar] [2020-07-13T17:21:13Z] ▶ DOCKER_USER and DOCKER_PASSWORD environment variables have not been configured. Defaulting to DOCKER_USER=$CI_REGISTRY_USER and DOCKER_PASSWORD=$CI_REGISTRY_PASSWORD
[WARN] [klar] [2020-07-13T17:21:14Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 1 of 10
[WARN] [klar] [2020-07-13T17:21:16Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 2 of 10
[WARN] [klar] [2020-07-13T17:21:18Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 3 of 10
[WARN] [klar] [2020-07-13T17:21:20Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 4 of 10
[WARN] [klar] [2020-07-13T17:21:22Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 5 of 10
[WARN] [klar] [2020-07-13T17:21:24Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 6 of 10
[WARN] [klar] [2020-07-13T17:21:26Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 7 of 10
[WARN] [klar] [2020-07-13T17:21:28Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 8 of 10
[WARN] [klar] [2020-07-13T17:21:30Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 9 of 10
[WARN] [klar] [2020-07-13T17:21:32Z] ▶ Vulnerabilities database not ready, waiting 2s before retrying. Retry 10 of 10
[FATA] [klar] [2020-07-13T17:21:34Z] ▶ error while waiting for vulnerabilities database to start. Giving up after 10 retries.: dial tcp: lookup arminc-clair-db on 169.254.169.254:53: no such host
ERROR: Job failed: exit code 1
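An observation that is not part of the original thread: the ping succeeds because the job container sits on the same runner network as the arminc/clair-db service, while the klar container is created by the docker:dind daemon on its own bridge and never learns that alias. One hedged workaround (it assumes the inner container can route to the service IP, and that getent is available in the docker:19.03.6 image) is to resolve the alias in the job shell and pass it into the inner container with --add-host:

# Hypothetical replacement for the last two script steps.
script:
  - export CLAIR_DB_IP=$(getent hosts arminc-clair-db | awk '{ print $1 }')
  - docker run --interactive --rm --add-host "arminc-clair-db:${CLAIR_DB_IP}" --volume "$PWD":/tmp/app -e CI_PROJECT_DIR=/tmp/app -e CLAIR_DB_CONNECTION_STRING="postgresql://postgres:password@arminc-clair-db:5432/postgres?sslmode=disable&statement_timeout=60000" -e CI_APPLICATION_REPOSITORY=vismarkjuarez1994/codigo-initiative -e CI_APPLICATION_TAG=latest registry.gitlab.com/gitlab-org/security-products/analyzers/klar

An alternative that sidesteps docker-in-docker entirely is to use the klar image directly as the job image, so the analyzer shares the runner network with the service.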

How to use rkt as container runtime instead of docker for kubernetes?

I tried using rktlet (https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md).
But when I try to run
kubelet --cgroup-driver=systemd \
> --container-runtime=remote \
> --container-runtime-endpoint=/var/run/rktlet.sock \
> --image-service-endpoint=/var/run/rktlet.sock
I am getting the below errors
Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
I0320 13:10:21.661373 3116 server.go:407] Version: v1.13.4
I0320 13:10:21.663411 3116 plugins.go:103] No cloud provider specified.
W0320 13:10:21.664635 3116 server.go:552] standalone mode, no API client
W0320 13:10:21.669757 3116 server.go:464] No api server defined - no events will be sent to API server.
I0320 13:10:21.669791 3116 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0320 13:10:21.670018 3116 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0320 13:10:21.670038 3116 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0320 13:10:21.670125 3116 container_manager_linux.go:272] Creating device plugin manager: true
I0320 13:10:21.670151 3116 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0320 13:10:21.670254 3116 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0320 13:10:21.670271 3116 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
W0320 13:10:21.672059 3116 util_unix.go:77] Using "/var/run/rktlet.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/rktlet.sock".
W0320 13:10:21.672124 3116 util_unix.go:77] Using "/var/run/rktlet.sock" as endpoint is deprecated, please consider using full url format "unix:///var/run/rktlet.sock".
E0320 13:10:21.673168 3116 remote_runtime.go:72] Version from runtime service failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
E0320 13:10:21.673228 3116 kuberuntime_manager.go:184] Get runtime version failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
F0320 13:10:21.673249 3116 server.go:261] failed to run Kubelet: failed to create kubelet: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService
How do I create a kube cluster using rkt? Please help.
That's the way to run rktlet. However, rktlet is still pretty experimental and, I believe, not actively developed either; the last commit as of this writing was in 05/2018.
You can try running it the other way, as described here or here. Basically, use --container-runtime=rkt, --rkt-path=PATH_TO_RKT_BINARY, etc. on the kubelet.
Is there a reason why you need rkt? Note that --container-runtime=rkt is deprecated in the latest Kubernetes but should still work (1.13 as of this writing).
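For illustration only (the rkt binary path is an assumption), selecting the deprecated built-in rkt runtime instead of the remote rktlet endpoint would look roughly like:

kubelet --cgroup-driver=systemd \
  --container-runtime=rkt \
  --rkt-path=/usr/local/bin/rkt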
I'm not sure about "unknown service runtime.v1alpha2.RuntimeService", but for "unknown service runtime.v1alpha2.ImageService" in my case it helped to remove "cri" from disabled_plugins in the /etc/containerd/config.toml config:
#disabled_plugins = ["cri"]
disabled_plugins = []
and then restart the containerd service: systemctl restart containerd.service
You can check the ctr plugin ls output for any plugin in an error state:
ctr plugin ls
TYPE ID PLATFORMS STATUS
io.containerd.content.v1 content - ok
io.containerd.snapshotter.v1 aufs linux/amd64 skip
io.containerd.snapshotter.v1 btrfs linux/amd64 skip
io.containerd.snapshotter.v1 devmapper linux/amd64 error
io.containerd.snapshotter.v1 native linux/amd64 ok
io.containerd.snapshotter.v1 overlayfs linux/amd64 ok
io.containerd.snapshotter.v1 zfs linux/amd64 skip
io.containerd.metadata.v1 bolt - ok
io.containerd.differ.v1 walking linux/amd64 ok
io.containerd.gc.v1 scheduler - ok
io.containerd.service.v1 introspection-service - ok
io.containerd.service.v1 containers-service - ok
io.containerd.service.v1 content-service - ok
io.containerd.service.v1 diff-service - ok
io.containerd.service.v1 images-service - ok
io.containerd.service.v1 leases-service - ok
io.containerd.service.v1 namespaces-service - ok
io.containerd.service.v1 snapshots-service - ok
io.containerd.runtime.v1 linux linux/amd64 ok
io.containerd.runtime.v2 task linux/amd64 ok
io.containerd.monitor.v1 cgroups linux/amd64 ok
io.containerd.service.v1 tasks-service - ok
io.containerd.internal.v1 restart - ok
io.containerd.grpc.v1 containers - ok
io.containerd.grpc.v1 content - ok
io.containerd.grpc.v1 diff - ok
io.containerd.grpc.v1 events - ok
io.containerd.grpc.v1 healthcheck - ok
io.containerd.grpc.v1 images - ok
io.containerd.grpc.v1 leases - ok
io.containerd.grpc.v1 namespaces - ok
io.containerd.internal.v1 opt - ok
io.containerd.grpc.v1 snapshots - ok
io.containerd.grpc.v1 tasks - ok
io.containerd.grpc.v1 version - ok