How to set up Prometheus node-exporter - docker-compose

How do I set up the Prometheus node-exporter to collect host metrics in Docker Swarm? Here is my compose file:
version: '3.3'
services:
  node-exporter:
    image: prom/node-exporter
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - --collector.filesystem.ignored-mount-points
      - "^/(sys|proc|dev|host|etc|rootfs/var/lib/docker/containers|rootfs/var/lib/docker/overlay2|rootfs/run/docker/netns|rootfs/var/lib/docker/aufs)($$|/)"
      - '--collector.textfile.directory=/etc/node-exporter/'
      - '--collector.enabled="conntrack,diskstats,entropy,filefd,filesystem,loadavg,mdadm,meminfo,netdev,netstat,stat,textfile,time,vmstat,ipvs"'
    ports:
      - 9100:9100
I am getting this error:
node_exporter: error: unknown long flag '--collector.enabled', try --help
What's wrong with the last line under the command section in this docker-compose file, and if it's set/passed incorrectly, how do I pass it correctly?

Use --collector.[collector_name] flags (e.g. --collector.diskstats) instead of --collector.enabled, which no longer works as of version 0.15.

To enable multiple collectors on version 0.15 or later, pass one flag per collector:
--collector.processes --collector.ntp ... and so on
In versions older than 0.15, specific collectors were enabled with:
--collectors.enabled meminfo,loadavg,filesystem
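
Applied to the compose file above, the failing --collector.enabled line can be replaced with one flag per collector. A minimal sketch (most of these collectors are already enabled by default in recent versions; trim the list to what you actually need):

command:
  - '--path.procfs=/host/proc'
  - '--path.sysfs=/host/sys'
  - '--collector.textfile.directory=/etc/node-exporter/'
  # one flag per collector instead of --collector.enabled
  - '--collector.diskstats'
  - '--collector.meminfo'
  - '--collector.loadavg'
  - '--collector.netdev'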

Related

Use Loopback + Portable Runner with docker-compose on Mac

I am trying to set up a local Beam runner for easier testing/development. I'd like to be able to test a Python pipeline that uses Kafka IO locally on my Mac.
Here's my current docker-compose:
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    ports:
      - "9092:9092"
  jobmanager:
    image: flink_image
    command: ['jobmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\nparallelism.default: 2"
    ports:
      - "8081:8081"
  taskmanager:
    image: flink_image
    scale: 1
    depends_on:
      - jobmanager
    command: ['taskmanager']
    environment:
      FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
  beam-jobserver:
    image: flink_image
    ports:
      - "8097:8097"
      - "8098:8098"
      - "8099:8099"
    entrypoint:
      - java
      - -cp
      - /target/flink/flink-web-upload/beam-runner.jar
      - org.apache.beam.runners.flink.FlinkJobServerDriver
      - --flink-master=jobmanager
      - --job-host=0.0.0.0
And my pipeline looks like this:
LOCAL_ARGS = [
    '--streaming',
    '--runner=portableRunner',
    '--environment_type=LOOPBACK',
    '--job_endpoint=localhost:8099',
    '--artifact_endpoint=localhost:8098',
    '--defaultEnvironmentType=EXTERNAL',
    '--defaultEnvironmentConfig=host.docker.internal:5000',
]
with beam.Pipeline(options=PipelineOptions(LOCAL_ARGS)) as pipeline:
    result = (
        pipeline
        | "Kafka Read" >> ReadFromKafka(
            consumer_config={"bootstrap.servers": "kafka:9092", 'auto.offset.reset': 'earliest'},
            topics=["test.topic"],
            with_metadata=False,
            expansion_service=default_io_expansion_service(
                append_args=[
                    '--defaultEnvironmentType=PROCESS',
                    "--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam/java_boot\"}",
                    '--experiments=use_deprecated_read',
                ]
            )
        )
        | "logging" >> beam.Map(lambda x: logging.info(f"logged: {x}"))
    )
However, it looks like LOOPBACK tried to open a port on my host machine and asked the task manager to talk back to it via localhost:<randomPort>, which is not accessible inside the container.
Unfortunately, host networking is not supported for Docker on Mac, so I need a way to override the LOOPBACK settings so that it connects to host.docker.internal:<dedicated_pool> instead of a random port on my host machine. Or are there other suggested workarounds? Thanks!
(The entire infra can be found here: https://gist.github.com/lydian/0db7614652c2ccdc733884134bf67f9b)
It looks like this is not supported; LOOPBACK mode mostly targets very simple setups.
You could come close by starting the worker manually, e.g.
python -m apache_beam.runners.worker.worker_pool_main --service_port=PORT
and then passing --environment_type=EXTERNAL --environment_config=host.docker.internal:PORT.
I was facing similar struggles recently. Luckily, there are two environment variables that facilitate testing on Docker for Mac. Unfortunately, there's not much documentation about them currently.
DOCKER_MAC_CONTAINER=1 limits the ports for communication with SDK workers to the range 8100 - 8200 instead of using random ports. Ports in that range are used in a round-robin fashion and have to be published.
BEAM_WORKER_POOL_IN_DOCKER_VM=1 tells an SDK worker to communicate with a runner node using host.docker.internal / via the Docker host instead of using localhost.
Here's an example of how to use these with Spark, but Flink shouldn't be any different.
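
As a rough sketch of how this could look in the compose file from the question (untested; here DOCKER_MAC_CONTAINER is assumed to go on the runner side, while BEAM_WORKER_POOL_IN_DOCKER_VM=1 would belong in the environment of the SDK worker pool process itself):

taskmanager:
  image: flink_image
  command: ['taskmanager']
  environment:
    DOCKER_MAC_CONTAINER: 1    # limit SDK worker ports to 8100-8200
    FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2"
  ports:
    - "8100-8200:8100-8200"    # publish the fixed range so the workers stay reachable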

Failed to load advanced configuration file "/etc/rabbitmq/advanced.config": unknown POSIX error

I'm getting an error when the configuration file is set.
My host is Ubuntu 22.04.
Inside the Docker container the user is rabbitmq; id -u rabbitmq shows a UID of 999.
I changed the file's owner with chown 999 advanced.config, but the same error persists:
Failed to load advanced configuration file "/etc/rabbitmq/advanced.config": unknown POSIX error
Error during startup: {error,failed_to_read_advanced_configuration_file}
version: "3.2"
services:
rabbitmq2:
image: rabbitmq:3-management
hostname: rabbitmq2
container_name: 'rabbitmq2'
ports:
- "5672:5672"
- "15672:15672"
- "5552:5552"
volumes:
- ./advanced/rabbitmq2/advanced.config:/etc/rabbitmq/advanced.config
# or using:
# - type: bind
# source: $PWD/advanced/rabbitmq2/advanced.config
# target: /etc/rabbitmq/advanced.config
environment:
- RABBITMQ_ADVANCED_CONFIG_FILE=/etc/rabbitmq/advanced.config
If I put the file somewhere else, or use another file name, the container runs, but RabbitMQ doesn't load the configuration file.
Changing the content of the file didn't help either (RabbitMQ still can't load it); I tried a blank file as well as some actual configuration, for example:
[
  %% 4 replicas by default, only makes sense for nine node clusters
  {rabbit, [{quorum_cluster_size, 4},
            {quorum_commands_soft_limit, 512}]}
]
Be sure the format is correct:
[
  %% 4 replicas by default, only makes sense for nine node clusters
  {rabbit, [{quorum_cluster_size, 4},
            {quorum_commands_soft_limit, 512}]}
].
Note the trailing period.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

Keycloak-gatekeeper cannot decode "state" due to "illegal base64 data"

I am getting this error from keycloak-gatekeeper when trying to access protected resources:
unable to decode the state parameter {"state": "8d07f10b-d096-4241-8a42-9f169de11352", "error": "illegal base64 data at input byte 8"}
Here is my docker-compose:
version: '3'
services:
  keycloak-proxy:
    image: "keycloak/keycloak-gatekeeper"
    environment:
      - PROXY_LISTEN=0.0.0.0:3000
      - PROXY_DISCOVERY_URL=http://keycloak.example.com:8181/auth/realms/realmcom
      - PROXY_CLIENT_ID=webapp
      - PROXY_CLIENT_SECRET=0b57186c-e939-48ff-aa17-cfd3e361f65e
      - PROXY_UPSTREAM_URL=http://test-server:8000
    ports:
      - "8282:3000"
    command:
      - "--verbose"
      - "--enable-refresh-tokens=true"
      - "--enable-default-deny=true"
      - "--resources=uri=/*"
      - "--enable-session-cookies=true"
      - "--encryption-key=AgXa7xRcoClDEU0ZDSH4X0XhL5Qy2Z2j"
  test-server:
    image: "test-server"
It seems to be a bug: https://github.com/keycloak/keycloak-gatekeeper/pull/433#issuecomment-443123758. Could you please file a Jira (https://issues.jboss.org/browse/KEYCLOAK) with the affected version and steps to reproduce the issue?

Send email with Gitlab docker image

Here's my goal: I would like to configure email for my GitLab server. I've followed a lot of tutorials, but I can't make it work.
My setup is as follows: I've got a reverse proxy in a Docker container, and my GitLab server is also in a Docker container.
About versions :
Docker version 17.09.0-ce, build afdb6d4
docker-compose version 1.16.1, build 6d1ac21
Here's my docker-compose.yml file
version: '3.3'
networks:
  proxy:
    external: true
  internal:
    external: false
services:
  gitlab:
    image: gitlab/gitlab-ce:latest
    container_name: gitlab
    environment:
      - TZ=Europe/Paris
      - GITLAB_TIMEZONE=Paris
      - IMAP_USER=USER#GMAIL.COM
      - IMAP_PASSWORD=MYGMAILPASS
      - GITLAB_INCOMING_EMAIL_ADDRESS=USERGMAIL+%{key}#gmail.com
    volumes:
      - /srv/gitlab/config:/etc/gitlab
      - /srv/gitlab/logs:/var/log/gitlab
      - /srv/gitlab/data:/var/opt/gitlab
    restart: always
    labels:
      - traefik.backend=gitlab
      - traefik.frontend.rule=Host:git.domain.com
      - traefik.docker.network=proxy
      - traefik.port=80
      - traefik.frontend.entryPoints=http,https
    networks:
      - internal
      - proxy
I followed this tutorial, which seems to be good:
https://github.com/sameersbn/docker-gitlab#available-configuration-parameters
I must be missing something in my configuration, but I can't figure out what it is.
Can anyone help me configure email sending? I also don't know the proper way to test email sending from GitLab.
Is it best to configure this through docker-compose environment variables or directly in the gitlab.rb file?
Some help would be much appreciated.
The instructions you followed are for a different docker image than the one you're actually using. You also set up IMAP, which is for receiving emails. In GitLab's case, it's for replying to issues by email.
What you want are the SMTP settings. The GitLab docker image does not come with sendmail installed, so you will have to follow the instructions here to set up SMTP in GitLab: https://docs.gitlab.com/omnibus/settings/smtp.html#example-configuration
You can dump gitlab.rb configuration right in your docker-compose under the environment section. My Fastmail setup for reference:
environment:
  GITLAB_OMNIBUS_CONFIG: |
    gitlab_rails['smtp_enable'] = true
    gitlab_rails['smtp_address'] = "***"
    gitlab_rails['smtp_port'] = 465
    gitlab_rails['smtp_user_name'] = "***"
    gitlab_rails['smtp_password'] = "***"
    gitlab_rails['smtp_enable_starttls_auto'] = true
    gitlab_rails['smtp_tls'] = true
    gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
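
As for the testing question: one way to verify the SMTP settings is to send a test mail from the Rails console inside the container (this is the approach the GitLab docs describe; the address below is a placeholder):

docker exec -it gitlab gitlab-rails console
# then, inside the console:
Notify.test_email('you@example.com', 'Test subject', 'Test body').deliver_now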

minikube --kubernetes-version URI fails. Can I use a customized localkube binary?

Can I use a custom Kubernetes version in which I have made some code modifications? I want to use the --kubernetes-version string flag to point at a customized localkube binary. Is that possible?
Minikube documentation says:
--kubernetes-version string The kubernetes version that the minikube VM will use (ex: v1.2.3)
OR a URI which contains a localkube binary (ex: https://storage.googleapis.com/minikube/k8sReleases/v1.3.0/localkube-linux-amd64) (default "v1.7.5")
But even when I try that flag with official localkube binaries, it fails:
minikube start --kubernetes-version https://storage.googleapis.com/minikube/k8sReleases/v1.7.0/localkube-linux-amd64 --v 5
Invalid Kubernetes version.
The following Kubernetes versions are available:
- v1.7.5
- v1.7.4
- v1.7.3
- v1.7.2
- v1.7.0
- v1.7.0-rc.1
- v1.7.0-alpha.2
- v1.6.4
- v1.6.3
- v1.6.0
- v1.6.0-rc.1
- v1.6.0-beta.4
- v1.6.0-beta.3
- v1.6.0-beta.2
- v1.6.0-alpha.1
- v1.6.0-alpha.0
- v1.5.3
- v1.5.2
- v1.5.1
- v1.4.5
- v1.4.3
- v1.4.2
- v1.4.1
- v1.4.0
- v1.3.7
- v1.3.6
- v1.3.5
- v1.3.4
- v1.3.3
- v1.3.0
Many thanks!
Two options come to mind:
You can launch minikube with --vm-driver=none, so the binaries are installed on your local filesystem. Then replacing the binaries should not be a difficult process.
You can create your own minikube ISO and then use the --iso-url flag. To build the ISO, you can follow this guide: https://github.com/kubernetes/minikube/blob/master/docs/contributors/minikube_iso.md
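
For illustration, the two options might look roughly like this (a sketch; the ISO path is a placeholder):

# Option 1: no VM driver, binaries land on the local filesystem (Linux, needs root)
sudo minikube start --vm-driver=none

# Option 2: boot from a custom ISO built with the guide above
minikube start --iso-url=file:///path/to/minikube.iso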